
How can an ordinary person systematically understand a vertical field in 4 hours?

2026/03/22 08:00
6 min read
For feedback or concerns regarding this content, please contact us at crypto.news@mexc.com

Author: danny

A friend asked me why I seem to know everything about everything or every field. Aside from past experiences or current projects, I often learn on the fly. Today, I'll share how I use AI tools and Notebooklm to facilitate self-learning for ordinary people.


First, a note: this article is about systematically and structurally learning a specific field, thing, or concept, and building your own knowledge system and graph. If you only need to grasp a few concepts and know roughly what something is, asking any mainstream AI will probably yield similar results.

Using AI to learn about something new currently faces several bottlenecks and limitations:

First, hallucination. AI will most likely hand you fabricated data and stories, especially in niche areas where the training corpus and learning materials are thin.

Second, there isn't much detail. Because of copyright and similar issues, AI has not read the full text of every article or book; its training material is often other people's reviews and summaries, especially in narrow sub-fields.

Third, it is hard to describe the problem accurately. If you have never touched the topic, you probably cannot articulate what you want to know, or trace its causes and consequences, much less collect information systematically and build a structured learning framework.

Theoretical section

My approach is actually quite simple: use the academic "citation/reference/impact-factor network" to refine information, then use AI argumentation and divergent thinking to stage a "left-brain vs. right-brain battle" and structurally understand a new thing.

The short version:

Find valuable papers → add them to NotebookLM → use AI tools to generate prompts → learn through Q&A in NotebookLM → add more valuable papers to NotebookLM → keep learning through NotebookLM → repeat.

The full version:

Step 1: Following the clues (Time: 0.25 hours)

Instead of searching for "what is XX, and what is the principle behind it", go straight for the load-bearing pillars of the field.

  1. Ask the AI (Gemini/Perplexity) directly: "In [a specific subfield], who are the three universally recognized leading figures? What are the 1-3 highly cited classic papers they published that laid the foundation for this field?" (For example, in the LLM field, focus on papers like "Attention Is All You Need".) This represents the "present life".

  2. Download the first-order references: extract the references cited by these 1-3 core articles and download all the core ones. This represents the "past life".

  3. Extract high-frequency second-order references: cross-reference the reference lists of the first-order references and select the top 10 most cited articles and the top 5 most frequently recurring ones. This represents the "future life".
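The reference mining above can be scripted. A minimal sketch using the Semantic Scholar Graph API to pull a paper's reference list, plus an offline helper that counts which second-order references recur most often (the seed paper ID and the API being reachable are assumptions; `top_shared_references` is my own illustrative helper, not part of any library):

```python
import json
import urllib.parse
import urllib.request
from collections import Counter

S2 = "https://api.semanticscholar.org/graph/v1/paper"

def fetch_reference_titles(paper_id: str) -> list[str]:
    """Return the titles of papers that `paper_id` cites (its first-order references)."""
    query = urllib.parse.urlencode({"fields": "title", "limit": "100"})
    with urllib.request.urlopen(f"{S2}/{paper_id}/references?{query}", timeout=30) as resp:
        rows = json.load(resp)["data"]
    return [row["citedPaper"]["title"] for row in rows
            if row.get("citedPaper") and row["citedPaper"].get("title")]

def top_shared_references(reference_lists: list[list[str]], k: int = 10) -> list[str]:
    """Across several first-order reference lists, return the k titles that recur
    most often: the 'high-frequency second-order references' of step 3."""
    counts = Counter(title for refs in reference_lists for title in refs)
    return [title for title, _ in counts.most_common(k)]

# Example (needs network):
#   refs = fetch_reference_titles("arXiv:1706.03762")  # "Attention Is All You Need"
#   print(top_shared_references([refs], k=5))
```

Feed `top_shared_references` the reference lists of all your first-order papers and the titles that keep reappearing are exactly the field's foundational layer.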

The core logic: seeing the field through the masters' eyes is the cheapest shortcut. Don't underestimate this step; you are downloading a map of decades of intellectual evolution in the field.

Step 2: Building a structured knowledge base (Time: 0.25 hours)

Upload all the classic documents selected in Step 1 to Google NotebookLM in one batch.

For classic articles, these two sources are generally sufficient: https://scholar.google.com/ and https://arxiv.org/

Why NotebookLM? Because it answers strictly from the sources you provide, it is far less prone to hallucination.

Through rigorous literature screening, you have deliberately cut off the internet's junk information and created a pure, highly focused knowledge base for the field.
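Fetching the selected papers can also be scripted. A minimal standard-library sketch that batch-downloads arXiv PDFs into a local folder ready for upload (the arXiv IDs are whatever Step 1 surfaced; the upload to NotebookLM itself stays manual, since NotebookLM has no public API):

```python
import pathlib
import urllib.request

def arxiv_pdf_url(arxiv_id: str) -> str:
    """arXiv serves each paper's PDF at a predictable URL."""
    return f"https://arxiv.org/pdf/{arxiv_id}"

def download_papers(arxiv_ids: list[str], dest: str = "knowledge_base") -> list[pathlib.Path]:
    """Download each paper's PDF into `dest`, skipping files already fetched."""
    out_dir = pathlib.Path(dest)
    out_dir.mkdir(exist_ok=True)
    paths = []
    for aid in arxiv_ids:
        target = out_dir / f"{aid.replace('/', '_')}.pdf"  # old-style IDs contain '/'
        if not target.exists():
            urllib.request.urlretrieve(arxiv_pdf_url(aid), target)
        paths.append(target)
    return paths

# Example (needs network):
#   download_papers(["1706.03762", "1810.04805"])
```

Keeping everything in one folder per topic also pays off later, in the postscript's "call up this folder" habit.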

Step 3: Inter-AI battles (Time: 1-3.5 hours)

This is the core of the entire workflow: let AIs with different strengths cross-examine each other against your knowledge base, forming structured knowledge paths and logical deductions that ultimately become your own insights.

Replace passive reading with active questioning: asking questions out of genuine curiosity is what engages the brain.

  1. Find anchor points: ask Claude, DeepSeek, Gemini, or Perplexity: "In the field of XX, what are the core controversies and underlying theoretical frameworks in academia/industry?"

  2. Closed-loop questioning: With these core controversies in mind, return to NotebookLM and ask: "Based on the literature I uploaded, how did the masters answer these core controversies? Please provide specific literature sources and reasoning logic."

  3. Go a level deeper: copy the rigorously grounded answers NotebookLM generates back to Gemini or Claude, which have strong logical-analysis skills, and instruct them: "Critically examine these viewpoints, pointing out logical flaws, limitations of their era, or blind spots. Based on this, what three deeper questions should I ask next?"

  4. Cognitive spiral ascent: take the flaws and new questions the critic AI pointed out back to NotebookLM and seek answers there.
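The four steps above form a loop you run by hand (NotebookLM has no public API, so today the copy-paste is manual). As a sketch of the control flow only, with the two AI roles as hypothetical stand-in callables:

```python
from typing import Callable

def cognitive_spiral(
    ask_grounded: Callable[[str], str],  # stand-in for a NotebookLM session
    ask_critic: Callable[[str], str],    # stand-in for Gemini/Claude as devil's advocate
    question: str,
    rounds: int = 3,
) -> list[tuple[str, str]]:
    """Alternate grounded answers and critical follow-ups, logging each round."""
    transcript = []
    for _ in range(rounds):
        answer = ask_grounded(question)      # step 2: closed-loop questioning
        critique = ask_critic(
            "Critically examine this answer, point out flaws, and "
            "propose the single deepest follow-up question:\n" + answer
        )                                    # step 3: the critic pass
        transcript.append((answer, critique))
        question = critique                  # step 4: spiral upward on the critique
    return transcript
```

The point of writing it out is that the loop has a fixed shape: grounded answer, adversarial critique, new question, and the transcript it accumulates is the raw material for your mind map.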

Practical training

Let me use "What exactly is LLM (large language models)?" as an example 😂

Step 1: Following the clues (Time: 0.25 hours)

I asked both Gemini and Claude; here is what each of them answered:

[Screenshot: Gemini's answer]

[Screenshot: Claude's answer]

Then you remember your middle-school teacher saying that scientific theories must connect past, present, and future. So you ask the AI to find which papers these core articles referenced (usually listed in the literature review), and which later articles cited these core articles, and have the AI filter the results for you.

Step 2: Building a structured knowledge base

Because of copyright and access restrictions on the original papers, you need to download them manually (or have your AI agent, your "lobster", do it for you).

Generally speaking, https://scholar.google.com/ and https://arxiv.org/ are perfectly sufficient.

After downloading, add everything to NotebookLM (one notebook currently supports roughly 300 sources).

Step 3: Inter-AI battles

Start by asking NotebookLM some simple, intuitive questions, then discuss and extend your understanding with the other AIs, and finally send your conclusions back to NotebookLM so it can refute, substantiate, supplement, and correct them.

[Screenshot: NotebookLM's answers and comments]

Repeat this process several times, until you can draw your own mind map of the field.

If you want to go a bit more hardcore, ask NotebookLM to quiz you to check your knowledge.

By now you have a real grasp of the field (at the very least you know its past, present, and future lives, so you can hold the conversation for five more minutes when someone asks).

Postscript

Save your "knowledge base" (and keep it updated in real time; you can let your "lobster" do that), with a separate folder for each topic. For example, I compiled the theoretical articles on "contract trading" into a notebook of their own. When you need to analyze something, just call up that folder, describe your data and cases, and you can analyze it with essentially no hallucinations.

It's not that current AI models are incapable of deep thinking and analysis; you're just not using the tools correctly. (Constraints and input conditions are crucial parameters when working with an LLM.)

Using AI is one skill; making AI empower people is another.

