This post has been translated from Korean to English by Gemini CLI.
While watching the video, I found the content so good that I decided to post a brief summary on my blog, with some light thoughts of my own below.
For developers, this is truly an era of AI chaos.


Opinions are divided on whether developers' jobs will disappear or remain.
There is also talk that senior developers are at risk.
=> In short, it's chaos.
What is certain is that
"developer" will no longer be a fixed job function, but a fluid concept constantly redefined alongside AI.
There is no clear term that everyone agrees on.
etc.
Coding using AI has a very wide spectrum.
-> The spectrum is defined in stages according to the level of autonomy. Link
GitHub Copilot came out 3 years ago...!
RLHF (Reinforcement Learning from Human Feedback): reinforcement learning based on human feedback.
Integrated into IDEs, etc.
I used to be bad at Spring development, but now I'm great at it.
Human-AI collaboration exists on a diverse spectrum beyond a simple dichotomy of human-led and AI-led.
-> You need to be able to select and switch to the optimal collaboration model depending on the nature of the task.
Complex domain problems, or tasks that require subtle judgment, such as architecture. -> The way we chat, get information, and interact.
The person is in the loop.
-> The way we use Gemini-CLI: it writes the code and then asks if we want to review it.
The person is on top of the loop.
Well-defined and repetitive tasks. -> It does a great job if you give it a task with clear conditions, or a repetitive one.
(It can also perform multiple tasks in parallel, like a shadow clone jutsu.)
=> We should not stay in only one of these three models.
We need to become "Mode Switchers" who flexibly switch collaboration models depending on the nature and complexity of the task.
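The "Mode Switcher" idea could be sketched as a tiny dispatch rule. This is purely illustrative: the mode names and the well_defined/repetitive attributes are my own assumptions, not from the talk.

```python
from enum import Enum

class Mode(Enum):
    HUMAN_IN_THE_LOOP = "review every AI step"        # complex domains, architecture
    HUMAN_ON_THE_LOOP = "AI works, human spot-checks"  # well-defined, repetitive work

def pick_mode(well_defined: bool, repetitive: bool) -> Mode:
    """Hypothetical heuristic: stay tightly in the loop unless the task
    is both well-defined and repetitive."""
    if well_defined and repetitive:
        return Mode.HUMAN_ON_THE_LOOP
    return Mode.HUMAN_IN_THE_LOOP

print(pick_mode(well_defined=True, repetitive=True).name)    # HUMAN_ON_THE_LOOP
print(pick_mode(well_defined=False, repetitive=False).name)  # HUMAN_IN_THE_LOOP
```

The point is not the code itself, but that the mode is a per-task decision, not a fixed personal preference.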
How should we collaborate with AI? An architecture just for AI? An architecture that AI creates itself?
-> That's not the story for now.
If you ask AI to do TDD, it just creates a test, creates the code, and passes it all at once.
AI may not need such a process.
The common consensus is that what is considered good architecture and code for humans is also preferred by AI.
In particular, since AI has learned from code snippets uploaded to the internet, it converges to an average level.
AI-focused companies like Anthropic (the makers of Claude) also say they prefer TDD when building tools.
Instead of humans creating the code and having AI pass it,
have AI create the tests and then pass them.
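A minimal sketch of that loop in Python (the slugify example is hypothetical, not from the talk): the test is the fixed contract, and the implementation is the part the AI is free to regenerate, as long as the test stays untouched and green.

```python
# The fixed contract: written once and then never modified by the AI.
# Any regenerated implementation must keep this green.
def test_slugify():
    assert slugify("Hello World") == "hello-world"
    assert slugify("  spaces  ") == "spaces"
    assert slugify("Already-Slugged") == "already-slugged"

# One possible AI-generated implementation; tomorrow's version may look
# completely different, and that's fine as long as the test still passes.
def slugify(text: str) -> str:
    return "-".join(text.strip().lower().split())

test_slugify()  # raises AssertionError if a regenerated version breaks the contract
```

The test code is the stable anchor against the "slot machine" nondeterminism of generated code.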
Nowadays, you don't even need to type, the recognition is so good.
Why is TDD important for collaboration?
AI is a kind of slot machine haha (the code that comes out today and the code that comes out tomorrow are different every time)
but don't modify the test code.
Many companies around are interested in productivity.
(If AI does 50%, does that mean we don't have to hire 50% more developers? - Developers are very expensive...)
Everyone is talking about how fast they can develop with AI.
Or rather, they are only talking about productivity...
It's bound to come up.
Because AI can produce code about 1000 times faster than humans.
(AI doesn't sleep either.)
=> There is a dilemma here.
Exploitation: using the best-known choice so far to maximize profit.
Exploration: trying out choices that are not the current best, to see whether something better than the best-known option exists.
You have to balance these two appropriately.
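This trade-off is the classic multi-armed bandit problem. An epsilon-greedy sketch (my own illustration; the tool names and scores are made up) shows the balance concretely: mostly exploit the best-known option, occasionally explore.

```python
import random

def epsilon_greedy(estimates: dict[str, float], epsilon: float = 0.1) -> str:
    """With probability epsilon, explore a random option;
    otherwise exploit the best-known one."""
    if random.random() < epsilon:
        return random.choice(list(estimates))   # exploration
    return max(estimates, key=estimates.get)    # exploitation

# Hypothetical running estimates of how well each approach has worked so far.
tools = {"known-good-workflow": 0.8, "new-ai-agent": 0.5, "new-framework": 0.3}

random.seed(0)
picks = [epsilon_greedy(tools, epsilon=0.2) for _ in range(1000)]
print(picks.count("known-good-workflow"))  # mostly the exploit choice
```

Tuning epsilon is exactly the "appropriate balance": too low and you never discover better tools; too high and you waste time on options that underperform.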
AI dramatically improves productivity, but at the same time raises concerns about developer skill stagnation or regression.
After a few prompts in Cursor, the development is done. But then you can't build up knowledge of React or components, and you become dependent on AI again.
-> In the end, the overall productivity drops.
(You have to give good instructions, but the limits of your own knowledge cap the kinds of requests you can make...)
⭐️ A strategic approach that harmonizes the pursuit of immediate productivity with intentional efforts for continuous learning and skill development is necessary!!!
Ultimately leads to sustainable growth and high performance.
There are quite a few attempts to use it in practice.
You no longer need to ask on Stack Overflow, in open chat rooms, or search on Google.
You can grow even more by learning with AI.
When it generates code, it offers "Do you want more explanation?" or "This code is composed of...",
but people just accept it without thinking, the so-called 'click-click'.
(It's like a senior developer next to you giving you instructions without getting tired or annoyed, and you say "I'll do it myself".)
'''
Give me 3 coding problems every morning to learn this skill. (easy, medium, hard)
Go to the official documentation, understand it, and translate it for me. (I'm too lazy for that. Summarize it in 3 lines for me.)
'''
When you decide to use a new technology, you need to have time to practice, not just time to code right away.
(If you're not given time, you have to do it in your own time. Of course. We are experts (pros). We need to practice constantly.)
Developers will still be needed for a while. In fact, precisely because they are developers, they may perform even better with AI.
AI is developing too fast.
Even in February of this year, while working on a side project, I was impressed by Cursor's front-end implementation skills,
but now that feels like a given, and it's integrating with other MCPs to become more sophisticated, or even encroaching on the back-end.
So, will developers become unnecessary now?
No one can predict (not even Toby), but it seems that the era of developers is still here.
Of course, its form and the work itself may look different from before.
The biggest thing I felt while using AI was that my ability to accomplish things grew very steeply.
Here, the ability to accomplish things means what we want, making things work.
But the more I do it, the more I wonder if this is really okay.
The code that AI produces varies greatly depending on how I construct the prompt, the context, or the AI's condition.
Whether the code comes out dirty or perfect, there's a dilemma either way.
It takes a very long time to clean up code once AI has made a mess of it, and on top of that, you have to follow the AI's train of thought, not a human's.
With code written by a human, you can at least ask "Why did you do it this way?", or look for comments or issues.
How do I ask AI about this? Do I have to go back to the context, the model, the prompt at that time and ask?
Even if I find it and ask, will it give me a perfect answer and solution?
If the code is perfect, we will have to continue to rely on AI unless we fully internalize it.
How is this different from calling the old TV an idiot box?
I think, for now, we need someone to mediate AI's hallucinations and the contamination they cause.
If we can't place 100% trust in AI, people need to intervene, manage it, and keep developing themselves.
As Toby said above, let's become people who get help from AI, not just exploiting it, but also learning and exploring.