A journalist is tasked with writing a profile of a well-known senator on a tight deadline. With the interview only hours away, she asks ChatGPT to prepare a list of questions. She shares the 30 questions, generated in less than a minute, with her editor to ensure that no stone is left unturned. The editor nearly rewrites the entire list: it lacks questions about critical early-life experiences, why the senator dropped out of college, how she parted ways with her first campaign manager, and other topics.
All of these missing questions draw on broader context and years of honed editorial judgment, things that AI cannot replace.
We are beginning to recognize the limitations of generative AI tools such as ChatGPT, which has over 800 million weekly active users, according to Reuters, and is genuinely becoming a household name. Researchers refer to this limit as the "AI wall": there is a ceiling on how much general-purpose AI can help people with tasks outside its area of competence. It underscores how important it is for professionals to keep honing human traits like curiosity and sound judgment. In today's AI-driven workplace, leaders who ask better questions can make better decisions, build stronger teams, and use AI more effectively. Here are three leadership techniques that separate merely using AI from using it well.
Contextualize each AI task in the broader picture
Understanding the big picture, as demonstrated by the journalist example, stays firmly within the domain of human intellect. That entails understanding not only the task at hand, but also its purpose and how it fits into broader individual or organizational objectives. If an editor wants a profile to shed light on a changing political scene, for example, that backdrop should influence the tone and direction of each query.
Leaders are ideally positioned to help teams frame questions with larger aims in mind, rather than chasing every available insight. This is especially important when using AI tools, which make it surprisingly easy to churn through task after task without examining the "why" of it all, resulting in AI-generated workslop.
The most effective leaders pause to consider how much focus a subject or activity requires, rather than how quickly it can be finished, and then steer their people accordingly.
Use outputs as jumping off points
In the early days of generative AI, prompt engineering was an essential skill. The effectiveness of an LLM session was frequently determined by crafting exactly the right prompt. Precision was everything.
As generative AI tools like ChatGPT get more complex and conversational, prompt chaining is gradually supplanting prompt engineering. Prompt chaining divides a task into smaller, more manageable phases that move logically, usually from general queries to more specific ones. For example, if you’re utilizing ChatGPT to create a competitive analysis, your questions could go as follows:
What is the current market landscape in [industry/product category]?
Who are the key rivals in this market?
How does each rival position itself in terms of value proposition, target audience, pricing, and core competencies?
What are these competitors' primary strengths and weaknesses?
Every output shapes the next prompt, prodding you to continually refine your questions. Strategic thinking is still necessary for the sake of efficiency, but the emphasis is no longer on getting it right the first time.
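The chaining pattern described above can be sketched in code. This is a minimal illustration, not a real integration: the `ask` function is a hypothetical stand-in for an actual LLM API call, and the example market ("wearable fitness devices") simply fills the bracketed placeholder from the list above.

```python
# Sketch of prompt chaining: each step's answer is folded back into the
# context for the next, more specific question.

def ask(prompt: str, context: str = "") -> str:
    # Placeholder model. A real implementation would send `context` and
    # `prompt` to an LLM API here; this stub just echoes the question.
    return f"[model answer to: {prompt}]"

def run_chain(steps: list[str]) -> list[str]:
    """Run prompts in order, carrying each answer forward as context."""
    context = ""
    answers = []
    for step in steps:
        answer = ask(step, context)
        answers.append(answer)
        # Accumulate the Q&A so later prompts can build on earlier ones.
        context = f"{context}\nQ: {step}\nA: {answer}".strip()
    return answers

steps = [
    "What is the current market landscape in wearable fitness devices?",
    "Who are the key rivals in this market?",
    "How does each rival position itself on pricing and target audience?",
    "What are these competitors' primary strengths and weaknesses?",
]
answers = run_chain(steps)
```

In practice you would review each answer before writing the next prompt; the loop above only shows how the context accumulates from general to specific.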
In short, the most effective leaders treat AI outputs as conversation openers rather than final answers.
Develop the judgment that AI cannot replace
Despite their obvious potential, generative AI tools do not automatically level the playing field for professionals. Consider this: only 26% of employees who use generative AI report increased creativity, according to Gallup. That's hardly the innovation boost you'd anticipate. The issue is not access to the technology, but how it is used.
Recent research explains why AI improves performance for some people but not others. It comes down to metacognition: the ability to plan, assess, and improve one's own thinking. According to experts writing in Harvard Business Review, employees with greater metacognitive skills stand to benefit the most from AI. In practice, this means reflecting on your own thinking while you work: spotting knowledge gaps, integrating new information into existing mental models, and adjusting your approach as needed. It's the difference between idly skimming a story and fully understanding it; only one leads to learning.
To ensure that leaders and staff get the most out of AI capabilities, a more proactive strategy is required. Instead of deferring to AI, question assumptions, investigate tradeoffs, and think critically.
Critical thinking allows executives to fully realize the benefits of AI while also helping junior staff develop the judgment to overcome its limitations and clear the AI wall.






