Recently I came across an issue that was driving me crazy: I couldn’t figure out what the root cause really was. I used gemini-cli (gemini3 pro) and copilot-cli (sonnet 4.5) to analyze the issue, but in the end neither was very useful.

The issue: some of my tests started failing after I merged the master branch into my current feature branch, and I had no idea why they broke. The error came from starting up the Spring Boot application via “@SpringBootTest”. After a few hours of battling with AI, the fix it kept suggesting was not great (it kept referring me to a change that 1. introduced a different configuration for the test environment and 2. caused another issue), and I was really unhappy with AI on this one. So I took a step back and went old school: I googled the error to try to understand what it actually means.

To be honest, AI is everywhere nowadays. Even Google Search now puts an AI response at the top, and unlike the CLIs, that response was actually useful! It gave me a summary of three possible root causes of the error and how to troubleshoot each one. The first cause immediately caught my attention and rang a bell. I followed the instructions and eventually resolved the issue with a single-line change. It turned out the problem was indeed caused by a recent commit that came in from master. In the end, I’m really glad I resolved the issue myself, even though I had some guidance from the AI in Google Search.

Looking back at how I resolved this issue, it actually scares me that I had let go of one of the most important characteristics of a Google engineer: critical thinking while troubleshooting. All I did initially was fire up the AI CLIs and describe the problem, hoping AI would fix it magically. We all know AI (a.k.a. LLMs) is not omnipotent, but the more you use it, the easier it is to fall into the trap of relying on it.

What’s the implication? You get used to delegating everything to AI, and eventually you lose that critical thinking altogether. That’s why I think experience really plays an important role here. Senior engineers have already established effective workflows for tackling problems over their long careers; when AI isn’t working, they can easily fall back on their old-school ways of troubleshooting.

For junior engineers, I strongly recommend resisting the urge to paste the error into an AI chat window and let it do the thinking for you. Instead, start from the bottom and try to understand what the error really means, and don’t panic. Collect the footprints from every place you have access to (e.g. logs and code), depending on the issue. And interact with AI in a different way: ask for the reason instead of the solution.