AI Champion
Why AI adoption is about letting your ego die
Most writing about AI adoption is written from a distance. A consultant’s lens, a leadership team’s dashboard, a clean framework and a failure stat.
This piece is a written conversation between Dylan Oh and Anna Levitt about what AI adoption looks like from the inside of a team. Dylan’s answers are his own, lightly edited for structure, with his voice preserved throughout, alongside Anna’s observations.
Dylan Oh is a software engineer and AI champion at a global betting technology company: over 1,000 employees, offices across multiple regions, Singapore as his base. He was asked to be the AI champion because he was already the loudest voice on AI before anyone made it official.
What he describes is not a success story in the tidy sense. It’s something more useful, an honest account of what happens when tools arrive inside a team of people who are already stretched, proud of their craft, and quietly afraid of what AI might mean for it.
I’ve been writing about this pattern for a while now. AI adoption fails not because of the technology but because of what the technology touches. The professional identity of people who have spent years being the person who knows.
The unspoken question underneath: if the tool can do this, what exactly am I for?
How it started
The rollout was top-down. Tools announced, training sessions scheduled, a governance team stood up to manage licensing. Without the structure, AI adoption in a company of this size would have fragmented into a thousand individual experiments with no shared learning.
What made this particular structure work was that the central governance team didn’t function as a bottleneck, approving or blocking experimentation from above. It functioned as infrastructure: handling licensing, setting guardrails, creating the container that made bottom-up experimentation possible without chaos. That’s not the norm. Most organizations either centralize too hard and slow everything down, or leave it entirely to individuals and lose the shared learning.
The bottom-up layer came through the championship program.
My role sits at that intersection: gathering real signals from my team and bringing them into the broader conversation.
That intersection is where most organizations lose the thread. Leadership creates permission, champions carry signals, the people gradually adapt.
The first tool
GitHub Copilot arrived first, a natural fit for a team already working in VS Code, IntelliJ, and Eclipse.
The initial reaction was genuine amazement: it could read the files we had open, understand context, and generate relevant code. Then it started hallucinating, referencing functions and patterns that didn’t exist in the codebase. I flagged this to the team immediately.
That arc, wonder then skepticism then discernment, is how adoption works. A slow calibration process where people learn where a tool is reliable and where it isn’t.
The team eventually landed on the use cases where the tools earned their place: troubleshooting, utility functions, unit tests, the work that is necessary but that nobody gets energized by.
The work that required eyeballing, like going through massive application logs or scanning a large codebase for anomalies, is mentally draining. When an AI assistant significantly reduced that effort, the value became undeniable. People could redirect their energy toward higher-priority work. Once engineers experienced that firsthand, the conversation shifted from ‘should I try this?’ to ‘how do I get more out of it?’
This is the shift I look for with every organization I work with. AI stops being an initiative and starts being a habit the moment it removes something people resented doing. The tool doesn’t need to be impressive, it needs to be helpful.
The resistance without a name
Dylan started asking teammates to request access.
The response wasn’t hostility. It was indifference. Most people were stretched with their existing workload and didn’t have the bandwidth to explore a new tool.
Indifference is the form resistance takes in technically competent teams where cognitive load is already full, and adding one more thing, even a useful one, requires extra energy.
Organizations consistently misread this. They see low adoption and diagnose reluctance. The actual problem is often capacity. People can’t explore what they can’t afford to be curious about.
Dylan traces this back to something most AI adoption content skips entirely.
It comes down to something most of us lose growing up, natural curiosity. I might be lucky in that regard. I’ve always gravitated toward new technology and taught myself how to program years ago. Most adults are fighting to keep their heads above water. Survival leaves little room for learning something new, let alone something that challenges how you have worked for years.
That observation matters. The problem isn’t attitude toward AI. It’s that there’s no bandwidth left for curiosity once survival is taking up all the space.
There was also something quieter happening underneath the indifference.
No one in my office said ‘what’s my role now?’ outright, but I noticed a subtle embarrassment when people mentioned using AI in their work, as if admitting it made them less capable.
That embarrassment is a signal worth paying attention to. In organizations where professional identity is built on expertise, on being the person who can write the code, read the logs, debug the system, AI can feel like an admission.
Dylan’s response was practical. He made his own usage visible.
I started sharing openly, and proudly, how I use AI to speed up my troubleshooting process. I would walk through how I prompt the AI to self-validate its answers, which helps reduce the anxiety engineers feel about hallucinations. Our dev leads reinforced this by sharing their own usage and initiating more conversations about it. Gradually, the atmosphere shifted. Talking about AI at work became normal instead of something to hedge around.
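To make that self-validation step concrete, here is a minimal sketch in Python, assuming the OpenAI client. The prompts, model name, and function are illustrative assumptions, not Dylan’s actual workflow; the point is the second pass, where the model is asked to audit its own answer against the same context before an engineer trusts it.

```python
# A minimal sketch of a self-validation pass, assuming the OpenAI Python client.
# All prompts, names, and the model string are illustrative, not Dylan's actual setup.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder model name

def ask_with_self_check(question: str, context: str) -> str:
    # First pass: answer the troubleshooting question using only the provided context.
    draft = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "system", "content": "You are helping debug an issue. Only reference "
                                          "functions, files, and settings that appear in "
                                          "the provided context."},
            {"role": "user", "content": f"Context:\n{context}\n\nQuestion: {question}"},
        ],
    ).choices[0].message.content

    # Second pass: ask the model to audit its own answer against the same context,
    # listing anything it cannot point to in the provided material.
    audit = client.chat.completions.create(
        model=MODEL,
        messages=[
            {"role": "user", "content": (
                f"Context:\n{context}\n\nProposed answer:\n{draft}\n\n"
                "List every function, file, or config key in the answer that does not "
                "appear in the context above. If everything checks out, reply VERIFIED."
            )},
        ],
    ).choices[0].message.content

    # Surface the audit so the engineer sees flagged references instead of trusting blindly.
    return draft if "VERIFIED" in audit else f"{draft}\n\n[Self-check flagged]:\n{audit}"
```

The exact code matters less than the habit it encodes: making the verification step explicit turns ‘trust the output’ into ‘check the output’, which is what took the edge off the hallucination anxiety.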
Culture change doesn’t happen through policy. It happens through repeated examples of what is acceptable to say, admit, and try. When people with credibility make their own use visible, it gives others permission to stop hiding theirs.
The identity piece
Here is where Dylan’s account goes somewhere most AI adoption writing doesn’t.
Honestly, the first form of resistance was my own. When these tools landed, they felt magical, but with a sharp edge of fear. If I don’t have to write code anymore, what’s my identity as an engineer?
Every AI initiative is also an identity shift, one that affects people who have spent years building competence, authority, and self-concept around specific skills.
Dylan wrote publicly about this: a post about the ego developers attach to manually writing source code.
The post was blunt. His framing for what to do about it was equally direct.
We spent years writing code manually. The moment LLMs appeared, it felt like cheating. It felt like our hard-earned skills had been devalued.
Your job is to solve a problem. Not to manually type out the code. AI promotes you as an engineer, helping you look at things at a higher level. Don’t let your ego attach to hand-rolled code and slow down your impact.
Three camps emerged in the comments almost immediately. Engineers who felt the shift from coder to problem solver was overdue. Engineers who argued that hand-writing code isn’t ego but ownership, and that the people most enthusiastic about AI-generated code are often the ones least equipped to spot when it’s wrong. Engineers somewhere in the middle who weren’t ready to commit either way.
Dylan tried responding to every comment, but the sheer volume of the debate was too much, and he found the friction wearing.
Dylan was wired for this transition, self-taught, naturally curious, already experimenting before the company made it official. He had an easier on-ramp than most, but he noticed the shape of it in others, the hesitation, the embarrassment, the indifference that is really just overload dressed up as disengagement.
The companies that navigate this well don’t ignore the identity piece. They make it explicit. They show people that AI is augmenting the expertise they’ve already built, not replacing the person who built it.
The lesson learned
One of the most useful parts of this conversation was Dylan’s willingness to name his own blind spots.
I drew the line too sharply between learning AI for myself and applying it at work. On my own time, I was taking courses, studying AI engineering deeply, and staying on top of every development. But at work, I was passive, attending AI champion meetings as a listener, treating the role as a relay station between management and my team. I assumed this was a top-down process, and my job was to execute instructions.
That gap between personal learning and organizational contribution makes sense when you think about it. Outside work, there are no stakes: you can break things, go down rabbit holes, try tools that might be useless. At work, time is accounted for and experimentation feels like a luxury you have to justify. The people most fluent with AI are often the ones who built that fluency on their own time, then showed up to work waiting for permission to use it.
I am now seeing how I can contribute beyond that: suggesting process improvements, sharing knowledge and industry developments, and building internal tools. It took me longer than it should have to realize the role was what I made of it.
The champion role is only as useful as the person decides to make it. Dylan’s shift is the kind of quiet transformation that doesn’t show up in adoption metrics but changes what’s possible for everyone around him.
The measurement gap
How do we prove, tangibly, that AI tools are improving efficiency? Most tech companies weren’t tracking developer productivity before AI arrived, so there’s no solid baseline to compare against or justify spending.
This is one of the most honest things anyone has said about AI adoption measurement.
Organizations want evidence before they build the measurement system that would actually produce evidence. Dylan’s framing of it as a shared problem rather than anyone’s fault feels right.
Both top-down and bottom-up approaches need to collaborate to find better ways to measure impact across different work functions. Without that shared effort, optimizing AI investment remains guesswork.
The companies that solve this are the ones where champions and leadership sit in the same room, agree on what matters, and build the baseline before the next tool arrives.
The talent take
Dylan’s view on talent is counterintuitive enough that it deserves its own section.
Don’t hire a wave of ‘AI-literate’ people to replace an existing team that understands the business deeply. Instead, empower those people with AI. Employees with product knowledge and historical context are extremely valuable, especially in the AI era, where the tools amplify domain expertise rather than replace it.
We both agree on this. Organizations tend to bring in AI-native hires to signal progress, and those hires don’t know the product, the customers, or the institutional history that makes the business run. Domain knowledge is not a liability to be replaced. It’s the thing AI actually needs to be useful.
Dylan adds a sharper edge.
Some AI-native candidates try to solve everything with AI and lack the ability to think through problems using first principles. The stronger hire is someone with solid work experience and exposure to AI. The companies that over-index on AI-native candidates will find out the hard way.
Adaptability matters too. Tools will keep evolving, and the engineers who treat learning as part of the job rather than a one-time event will compound their value over time. The ones who don’t will find the gap harder to close, whether they were AI-native from the start or not.
The Singapore layer
The context Dylan operates in adds texture that doesn’t show up in Western AI discourse.
From what I have observed, there’s more anxiety than excitement around new technology in Singapore compared to what I read from the West. I have spoken to many friends across different industries. The dominant tone is pessimism. Very few think about AI positively, and even fewer are actively learning new skills around it. On the other hand, the Singapore government provides extensive grants and subsidies for AI upskilling. The infrastructure for learning exists. The appetite, in many cases, doesn’t.
Access and readiness are not the same thing. A place can have every resource in position and still struggle with motivation, especially when people are already carrying the cognitive weight of keeping up with their existing work.
Dylan also flagged how much hierarchy matters for any organization thinking about adoption structure.
I have seen and heard about organizations, particularly in parts of Asia, where rigid hierarchy makes bottom-up feedback nearly impossible. Everything flows top-down. When AI adoption demands rapid experimentation and honest feedback from the people closest to the work, that structure becomes a bottleneck. The companies that can’t flatten their feedback loops around AI will struggle the most.
AI adoption is not a one-time rollout. It’s a continuous calibration process that depends on real signal from the people actually using the tools. If hierarchy prevents that signal from moving upward, organizations are flying blind.
What’s still in progress
Dylan’s company is doing well by most measures. AI-assisted coding is normalized. Engineers are talking about agentic workflows and MCP server integrations. The atmosphere shifted from hedging to experimenting.
That progression, from individual tool use to workflow design to agentic systems, is what mature adoption actually looks like. Not everyone using the same tool the same way, but people designing their own systems around it. The governance structure that arrived at the beginning made this possible. It gave people enough clarity and safety to experiment their way toward sophistication.
The knowledge gap is still real.
Some engineers are running multi-modal agents in sophisticated coding workflows. Others are treating Cursor like a ChatGPT window. Knowledge sharing is the lever.
That spread exists in every organization. The tools are in place but the variance in how people use them is enormous. The next problem is depth, not access, and it requires champions who are willing to teach, not just relay.
Final thoughts
Improve yourself first. If you want to convince others to embrace AI, you need to know how to use the tools deeply, not surface-level demos, but real workflow integration. Understand how the tools work under the hood so you can make informed comparisons. And learn how to communicate a concept clearly, whether you are explaining it to leadership or to a junior engineer. Teaching is the fastest way to learn, and credibility is earned by doing, not by title.
The champions who move their organizations are not the ones with the most enthusiasm or the best title. They’re the ones who are doing the work, visibly, and making it safe for others to admit they’re figuring it out too.
Dylan Oh runs the Zero Address publication on Substack and is a software engineer and AI champion based in Singapore.
Anna Levitt runs how to boss AI and is the founder of Bubble Boss Co, an AI readiness consultancy focused on the human side of AI adoption in mid-market organizations.