5 strategies for learning better with AI (and pitfalls to avoid)

Recently, I shared my predictions for how artificial intelligence will change the learning process. In short, I expect AI will reduce the need for many skills, potentially making the typical user less capable.
Just as the invention of writing made us worse at remembering and the invention of the calculator made us worse at mental arithmetic, having a technology that can read email, write code, and do homework for us is bound to make most of us less good at doing those things ourselves.
But I think the average here masks a lot of variation. Many people will learn less and think less, but some will learn more.
I've experimented extensively with AI in my own research and studies, with mixed results. For some aspects of learning, AI does a great job, saving me hours of unnecessary effort. In other cases, the results are mediocre, and occasionally they're downright misleading.
Here are some of my takeaways from using AI as an accelerated learning tool:
1. Select which books to read (but you still need to read them).
Artificial intelligence is a great tool for book recommendations. I’ve used it extensively on recent Foundations projects, often using ChatGPT and other tools to recommend books based on fairly specific criteria to round out my reading list.
But while AI can offer great suggestions on what to read, AI summaries of books are not a good substitute for reading the books themselves. Part of this is a matter of verifiability (more on that later), but the deeper issue is that summaries are too general – you can't absorb an idea just by skimming its outline. Only by reading a book in full do you pick up the examples, the base of knowledge, and the author's perspective, which is what lets you use the ideas to reason about other things.
AI can also help you figure out when a book isn't worth your time, so you can read from a curated list that matches your interests, ability level, and knowledge gaps.
2. Seek alternative advice (but think for yourself).
In responses to my recent article, a common refrain was that AI seems good at the things you're not good at, but in areas where you're highly competent, its advice looks pretty bad. Something like an AI version of Gell-Mann amnesia.
For example, I discovered that AI is a terrible ghostwriter. Even for small things where I had no ethical qualms about using AI, like generating paragraph-level summaries of lessons I wrote for courses, I found AI summaries to be poor and ended up writing them myself.
Likewise, I found that AI wasn't helpful for giving me business advice, generating flashcards, or designing courses on topics I wanted to learn. Some of this may be a shortcoming of my skills, or of my prompts, but the fact that I generally get satisfactory results elsewhere makes me think the problem is simply that my standards are much higher in these areas.

But even if AI doesn’t help with the things you know best, its breadth is incredible. Often what AI does best is show you extensive knowledge outside of your area of expertise and provide suggestions you may not have heard of before.
When using AI to help me solve problems, I find it helpful to ask the AI to suggest alternative ways to solve problems that I may not have considered. This often opens up solution paths that I would never have explored on my own.

3. When accuracy is important, ask for verifiable answers (and fact-check key information).
Everyone makes a fuss about AI hallucinations. I agree they're a problem, but every source of information contains factual errors, so the problem isn't unique to AI. When researching my last book, I was struck by how often even peer-reviewed papers contained inaccurate citations.
I think the real problem with hallucinations is not the error rate, but that the pattern of errors doesn't follow human norms. Generally speaking, something written in the style of a well-researched article is less likely to contain factual errors than an off-the-cuff comment on a podcast. AI breaks this convention: it is just as prone to hallucinating when writing in a seemingly meticulous style as when improvising.
There's a simple mitigation for this problem: ask the question in a verifiable way. For example:
- Don’t ask for a quote, ask for the source of the quote (so you can check the original text).
- Ask for a link to the original paper, not just the abstract.
- Ask for code, not just the output of the analysis.

A couple of other tricks: ask the AI to double-check its answers (this often triggers a "reasoning" mode and can catch some hallucinations), or paste the source document into the chat and ask the AI to point to where in the document its claims appear, saving you verification time (Google's NotebookLM is handy for this).
4. Start by generating scaffolding when practicing skills (don’t just ask to “learn”).
One of the major weaknesses of my early ultralearning projects was the lack of good practice materials. Some MIT open courses have rich problem sets with solutions, while others have very few. Some languages have excellent resources, while others have almost none. In terms of how easy it is to master a new skill, the difference can be night and day.
AI can solve many of these problems of finding good practice materials, though it introduces new pitfalls of its own.
For example, one of my early experiments in learning with AI was having ChatGPT drill me on Macedonian grammar. This was a real help, since there are very few learning resources for Macedonian, and grammar can be a major sticking point.
Overall, the prompting worked well, and I got useful corrections on things like clitic placement and verb conjugation. But after an hour or so of practice, the AI tended to get stuck in "loops," where the sentence patterns converged on a few variations and I ended up practicing the same thing over and over.
One solution I found was to create some scaffolding first. For Macedonian, I drafted a curriculum containing a word list, grammatical patterns to practice, and various contextual modifiers, then prompted the AI to generate exercises following those structures, which helped avoid the "loop" problem.
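The scaffolding can be as simple as a script that samples fresh combinations of words, patterns, and contexts to feed the AI each round. A minimal sketch in Python, where the word list, grammar patterns, and prompt wording are illustrative placeholders rather than an actual curriculum:

```python
import itertools
import random

# Placeholder scaffolding for a Macedonian practice session.
words = ["книга", "куќа", "пријател"]  # book, house, friend
patterns = ["past tense", "clitic doubling", "definite article"]
contexts = ["at the market", "in a phone call", "writing a letter"]

def practice_prompts(n, seed=0):
    """Sample n distinct (word, pattern, context) combinations and turn
    each into a tutor prompt, so drills don't collapse into repetitive loops."""
    rng = random.Random(seed)
    combos = list(itertools.product(words, patterns, contexts))
    picked = rng.sample(combos, n)  # distinct combinations, no repeats
    return [
        f"Give me one Macedonian sentence using '{w}', practicing {p}, "
        f"in the context '{c}'. Then ask me to produce a similar sentence "
        f"and correct my attempt."
        for w, p, c in picked
    ]

for prompt in practice_prompts(3):
    print(prompt)
```

Because the combinations are sampled without replacement, each prompt you paste into the chat pushes the session toward a different corner of the curriculum instead of converging on one pattern.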

I haven't done a math-intensive project since the advent of generative AI, but my strategy would be similar for technical subjects. Don't just ask the AI to help you learn math; instead, collect a list of problems of the types you want to master and ask it to generate variations. That way, the AI helps you practice the underlying technique across many superficial differences.
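As a toy illustration of what "variations of a question type" means, here is a Python sketch for one problem type (power-rule integrals) with randomized surface details; in practice you would hand a template like this to the AI and ask it to vary the numbers and dressing while keeping the technique fixed:

```python
import random

def integral_variant(rng):
    """Produce one surface-level variation of a fixed problem type:
    integrating a*x^n via the power rule, with random coefficients."""
    a = rng.randint(2, 9)
    n = rng.randint(2, 5)
    problem = f"Integrate {a}x^{n} dx"
    # Power rule: integral of a*x^n is a/(n+1) * x^(n+1) + C.
    answer = f"{a}/{n + 1} x^{n + 1} + C"
    return problem, answer

rng = random.Random(42)
for _ in range(3):
    problem, answer = integral_variant(rng)
    print(problem, "->", answer)
```

The deep structure (the power rule) stays constant while the surface features change, which is exactly the kind of varied drilling that builds transferable skill.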
5. Use AI as a tutor (not a teacher).
Based on suggestions from many readers, I was initially intrigued by the idea of using AI to help with course design. The first step of a self-directed project is often the hardest: designing a plan of study when you still know very little about the topic.

However, I found that AI is actually pretty bad at this. The problem isn't that AI can't generate lessons on a subject; it's that it seems to lack awareness of the student's level and of how to prioritize the concepts worth teaching.
For example, I figured a course on transformer architecture would be squarely in an LLM's wheelhouse – it's generally very familiar with AI-related topics, perhaps because of the glut of explainers online. But the results were a mess. It kept wanting to dive into the latest cutting-edge optimizations before explaining basics like the attention mechanism. Despite spending a few hours fiddling with the prompt, I couldn't get it to produce anything close to the median explainer I could find by searching online.
As a result, I usually avoid "teach me this topic" requests. For now, these seem better served by materials made by actual people, perhaps because people have better mental models of teaching sequences.
However, AI's ability to give (usually) helpful answers when you ask pointed questions is enormously useful. I often flip back and forth between ChatGPT and the book I'm reading whenever I'm confused or have a question the author doesn't answer. AI tutoring seems to work better than AI teaching because the way you naturally phrase a specific question is enough to constrain the answer, making it more likely to give you what you want.
Bonus: Vibe coding apps to solve your problems (but don’t reinvent the wheel)
Most of the time I use AI, a simple chat interface is enough. But for learning tasks where you want to repeat the same process again and again, building an app to do the job can be more consistent than re-running prompts.
I've made a few learning-aid utilities: one takes a Chinese YouTube video as input, transcribes it if there are no subtitles, extracts key vocabulary with definitions, and produces an English summary; another helps with drawing by breaking reference photos into dark, mid, and light tones. Recently, I generated custom Anki flashcards for Macedonian with text-to-speech audio and sentence variations.
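To show how small these utilities can be, here is a hypothetical sketch of just the flashcard-export step: it writes card fronts and backs to a tab-separated file, a format Anki's import dialog accepts. The sentences are placeholders, and the text-to-speech audio is omitted to keep the sketch self-contained:

```python
import csv
from pathlib import Path

# Placeholder cards: front = Macedonian sentence, back = English gloss.
# In a real workflow, an AI would generate the sentence variations.
cards = [
    ("Јас читам книга.", "I am reading a book."),
    ("Таа чита книга.", "She is reading a book."),
    ("Ние читаме книги.", "We are reading books."),
]

out = Path("macedonian_cards.tsv")
with out.open("w", newline="", encoding="utf-8") as f:
    writer = csv.writer(f, delimiter="\t")
    writer.writerows(cards)  # one card per line: front <tab> back

print(f"Wrote {len(cards)} cards to {out}")
```

Anki maps each tab-separated column to a note field on import, so the whole "generate variations, then turn them into cards" loop reduces to regenerating this file.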
My suggestions for creating this kind of quick and dirty app to solve personal learning problems are:
- Ask the AI to write the whole app as a single HTML file using JavaScript. This isn't best practice for real-world applications, but it means you can just save the file to your desktop and run it in your browser.
- While plain text prompts will work, it can help to sketch the interface on paper and attach a photo of the sketch; pointing to specific examples of similar software or apps also helps.
- If your app idea is fairly complex, first ask the AI to write a specification, then ask it to build from that spec. Oddly enough, this seems to work better than building the app in one shot.
- Get an API key so your application can query the AI itself. If you're not sure which one to use or how to set it up, ask the AI while it's building your app. I used this for my Chinese video utility, since the AI does the behind-the-scenes work of transcription, translation, keyword extraction, and summarizing.
While vibe coding is much faster than writing an app from scratch (even if you know how), it's still slow compared to using something off-the-shelf. Before you start building, ask the AI whether what you're looking for already exists. Custom solutions are probably best reserved for unusual problems, or for needs the market doesn't serve well.
How can you use artificial intelligence to assist your learning? Please share with me in the comments some of the ways you’ve used AI to learn new things and deepen your skills.



