An AI-first approach?

It is an understatement to say that AI is generating a lot of hype. There are countless headlines about how AI will replace entire fields of work and predictions of a "1-person company with a $1b market valuation". I will share my thoughts on those topics in due course.

Today I want to discuss one of the buzzwords I’m hearing lately - an “AI-first approach”.

What does an “AI-first approach” mean? Naturally, I asked the LLMs to answer that question.

AI-first approach is a strategy where you automatically think of using artificial intelligence as the primary way to solve a problem or create something new, rather than using it as an add-on or an afterthought.

Think of it like this:

Old Approach: "I need to write a report. I'll open a blank document and start writing. Maybe I'll use an AI later to check my grammar."

AI-First Approach: "I need to write a report. My first step is to ask an AI to gather the initial data, create an outline, and write a first draft for me to edit and refine."

Gemini 2.5 Pro

And again, I asked LLMs to outline some of the pros and cons of this approach.

Pros:

Efficiency and Time Savings
AI can automate routine tasks like scheduling, email sorting, data entry, and basic analysis, freeing up time for higher-value work. People can focus on creative problem-solving and strategic thinking rather than repetitive administrative tasks.

Enhanced Decision-Making
AI can process vast amounts of data quickly to identify patterns and insights humans might miss. This can lead to more informed decisions in everything from business strategy to personal finance planning.

24/7 Availability
Unlike human assistants, AI tools are available around the clock. This means you can get help with tasks, research, or problem-solving whenever you need it, regardless of time zones or business hours.

Personalized Assistance
AI can learn from your preferences and habits to provide increasingly tailored recommendations, whether for work projects, learning paths, or daily life optimization.

Cons:

Over-Reliance and Skill Atrophy
Heavy dependence on AI might lead to weakening of critical thinking, writing, or analytical skills. There's a risk of becoming unable to perform tasks effectively without AI assistance.

Privacy and Data Security Concerns
An AI-first approach often involves sharing personal or business data with AI systems, raising questions about data privacy, security breaches, and how information is stored or used.

Lack of Human Context and Nuance
AI can miss emotional subtleties, cultural context, or complex interpersonal dynamics that are crucial in many work and life situations. This can lead to tone-deaf responses or inappropriate recommendations.

Cost and Accessibility
Premium AI tools can be expensive, potentially creating disparities between those who can afford comprehensive AI assistance and those who cannot. This could exacerbate existing inequalities in workplace productivity or educational opportunities.

Job Displacement Anxiety
While AI can enhance productivity, it also raises concerns about job security and the changing nature of work, leading to stress and uncertainty about career paths.

These points should give you a balanced foundation for exploring both the opportunities and challenges of an AI-first lifestyle.

Claude Sonnet 4.0

I could have edited those points and made them the key content of this post, but that's not what I wanted to talk about.

The truth is … I don’t usually adopt an AI-first approach. In fact, this is the first time I’ve asked LLMs to create content for my writing before I had a first draft.

Am I an old dinosaur? Maybe.

As the response from Claude above noted, there are great efficiency gains to be had from getting AI to create a first cut of an answer, and there are definitely things AI can do that humans can't.

But the first point raised in the cons - skill atrophy - is a real risk in my opinion.

If you exercise, you'll know that your fitness level deteriorates (pretty quickly) when you take a break for holidays or illness. Muscles and aerobic fitness need to be maintained through regular workouts.

I remember reading a post on LinkedIn a few months ago - someone said they had become reliant on LLMs to draft content, and they felt lost when they didn’t have access to AI models. As someone who loves to write - that possibility filled me with dread.

There was a recent study by MIT on Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task. I saw breathless commentary on whether ChatGPT is rotting our brain. Of course, the actual research paper is more nuanced.

I read the abstract and scanned sections of the paper - here’s a decent explanation of the research and key findings:

The researchers divided participants into three groups:

LLM group: Used ChatGPT to write essays.

Search group: Used traditional search engines.

Brain-only group: Wrote essays with no external tools.

Using electroencephalography (EEG) to measure brain activity, Natural Language Processing (NLP) to analyze the essays, and interviews to gauge the participants' sense of ownership, the study found the following:

Reduced Brain Activity: The LLM group showed the weakest and least widespread neural connectivity, indicating lower cognitive engagement compared to the other two groups. The "Brain-only" group showed the highest level of engagement.

"Cognitive Debt" in Action: When participants who had consistently used ChatGPT were asked to write an essay without it, their brains remained relatively under-engaged. This suggests that a reliance on the AI tool had created a "debt" that wasn't easily "repaid."

Loss of Ownership and Recall: The LLM group reported the lowest sense of ownership over their work and had more difficulty accurately quoting from the essays they had just written.

Homogenized Writing: The essays produced by the LLM group were more formulaic and similar to each other, lacking the unique vocabulary and conceptual range seen in the essays from the "Brain-only" group.

Strategic Use vs. Reliance: Interestingly, when participants from the "Brain-only" group were later allowed to use ChatGPT, they showed high neural engagement. This suggests that using AI as a tool to augment existing skills is different from relying on it from the start.

Gemini 2.5 Pro

(I can't wait to nerd out and read about the differences in brain activity across the three groups - I didn't study science beyond high school, but I enjoy reading scientific research papers.)

What does this mean on a practical level? Will adopting an AI-first approach make us dumber? The jury is still out. The sample size in this study (54) is small, and it covered one specific task (essay writing) in a laboratory setting.

I'm sure we'll see more research in this area going forward.

In a way, we're all running our own experiments. We can notice how differently we engage with a task depending on whether we use AI to help. Are the responses from the LLMs opening new doors? Or do we feel detached from what we're doing because it becomes a series of Q&As with the LLMs?

Personally, I enjoy thinking and writing, and in the foreseeable future, I’ll continue to write a draft (maybe as simple as a list of bullet points) before I engage LLMs for research or ask them questions on a topic.

However, there are definitely tasks where I will use AI first - e.g. creating a draft presentation deck based on my talking points. I also use LLMs to generate the images for my posts because I have no skill in creating visual content.

To take a step back from the question of this post - this article on the research paper is thought-provoking. I like the analogy between the arrival of LLMs and the introduction of calculators in schools. Instead of asking whether we should take an AI-first approach, maybe the better question is how AI will fundamentally transform the way we think and solve problems in the future.

MIT researchers say using ChatGPT can rot your brain. The truth is a little more complicated
Like calculators before them, AI tools can raise the bar for what people can achieve – if they’re used the right way.

Why don’t you …

Conduct your own MIT-esque experiment by picking a problem or task and approaching it in three ways - LLM first, search engine first, and brain only.

Take note of how differently you think and engage with the task, the quality of the output, and what you can recall a day after you finish.

Until next time!

Vee