Recently, Professor Liu Run and entrepreneurs from the "Wen Dao Global" program took a business trip to the United States.

They visited several overseas companies and experienced the global tech wave firsthand at the CES exhibition. Throughout this trans-Pacific business exchange, Professor Liu equipped each team member with an AI-powered tool—the DingTalk A1 Voice Recording Card—and used it throughout the journey for recording, communication, and collaboration.

Let’s hear his first-hand experience:

Today, I’ll boldly share with you my personal methodology—and a special companion we brought along on this trip: the DingTalk A1 Voice Recording Card. It’s a note-taking card, a translation card, and also a personal AI assistant that travels with you.

Where should I begin?

Let’s start with something we often need during travel: flying.

01 Node-Based Input

Due to frequent business trips, I spend a significant amount of time on airplanes every year. On the “Wen Dao Global” journey, flight time is even longer.

Besides necessary rest, I usually use these hours for reading. At an altitude of 10,000 meters, free from WeChat messages and phone calls, it’s the perfect environment to dive into books that require deep focus.

But how do I read?

Many people adopt a “linear reading” approach—like listening to a cassette tape, starting from the first song on Side A and ending with the last on Side B. Knowledge flows sequentially into the brain.

This method feels natural, but comes with a drawback: poor retrievability. When you want to recall a specific piece of information, you might have to go through the entire sequence again.

That’s why I prefer another method: “node-based input.”

What is “node-based input”?

In short, instead of viewing knowledge in a book as a continuous line, I see it as a network made up of countless knowledge nodes. My goal isn’t to memorize the entire network, but to identify valuable nodes during reading, tag them, and store them in my “second brain.”

After all, what truly matters isn’t what you’ve read—it’s what you can recall and reactivate.

This feeling is like walking through a forest. Every time I encounter an interesting tree, I pull out a GPS tracker, record its exact coordinates, take photos of its key features, and perhaps jot down some reflections. Then, I save this “data package” into my personal map system. From then on, that tree becomes a node on my knowledge map—one I can instantly revisit anytime.

Later, when I need this knowledge—for writing articles or making decisions—I can quickly retrieve and recombine these nodes.

For example, during the flight to the U.S., I read several books.

Some insights about AI were particularly inspiring.

So how did I tag these insights? Pausing to take notes on my phone or laptop would break my immersive reading flow. Instead, I reached for the small card attached magnetically to the back of my phone and held it down. It gave a slight vibration—a tactile confirmation saying, “I’m listening now.” Then, without stopping, I continued reading while speaking into it: “This book mentions an interesting point about technological suppression and diffusion. It says…” Another long press, another vibration—this time signaling, “Got it, saved.” Thus, the act of “tagging a node” was completed with minimal disruption to my thinking process.

This little card is the DingTalk A1 Voice Recording Card we brought on this trip. Its slim design allows it to stick magnetically to the back of the phone, making it nearly imperceptible. The Type-C connector also means one less charging cable to carry.

After landing, I opened the DingTalk app on my phone. The voice memos recorded mid-flight had already been automatically synced and transcribed into text, neatly filed under the category “Reading Notes.” The duration of each recording also helped me quickly assess its complexity.

I then wrote a custom prompt to assign AI the role of a “reading assistant,” instructing it to generate structured notes containing elements such as “book title, core insight, my reflection, related concepts.”
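The prompt itself can be treated as a reusable template. Here is a minimal Python sketch of that idea; the field names and the `build_prompt` helper are my own illustration of the workflow described above, not DingTalk's actual interface:

```python
# Hypothetical sketch: a reusable "reading assistant" prompt template
# with the structured-note fields mentioned in the text. Not a real
# DingTalk API; the template wording is illustrative.

READING_ASSISTANT_PROMPT = """You are my reading assistant.
From the voice-note transcript below, produce a structured note with:
- Book title
- Core insight (1-2 sentences)
- My reflection (keep my own words where possible)
- Related concepts (2-3 bullet points)

Transcript:
{transcript}
"""

def build_prompt(transcript: str) -> str:
    """Fill the template with one transcribed voice memo."""
    return READING_ASSISTANT_PROMPT.format(transcript=transcript)

prompt = build_prompt(
    "This book mentions an interesting point about "
    "technological suppression and diffusion..."
)
print(prompt)
```

The advantage of a fixed template is consistency: every voice memo, however rambling, comes back as a note with the same fields, which makes the "second brain" searchable later.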

This entire process represents a complete cycle of “node-based input.”

I only focused on identifying “which trees are worth tagging.” The tasks of “locating, photographing, and storing in the map” were all handled by the tool.

Of course, books represent structured, static knowledge. But stepping into the CES exhibition hall, where fragmented information floods in from all directions—that’s where the real challenge begins.

So, what do I do?

02 Digital Twin Method

The CES exhibition hall is a massive flood of information.

It spans multiple venues, together equivalent to more than 30 football fields. It’s so loud that you practically have to shout to be heard. The information density is so high that every few steps bring three or four new products and fresh concepts into view.

In such an environment, our brains activate a self-protection mechanism. We instinctively filter out information we deem unimportant, remembering only the most striking or novel fragments. In psychology, this is known as the “cocktail party effect.”

However, for business observation, this self-filtering can be fatal.

Because many meaningful details often hide within the “background noise” we choose to ignore.

So what’s the solution?

Willpower alone cannot withstand an information flood. Expecting yourself to remember everything is unrealistic.

That’s why I adopted a “digital twin” method.

What is the “digital twin” method?

The term originates from the industrial world, referring to creating a 1:1 digital replica of a physical entity—one that can be synchronized and traced over time. Applied to information processing, it means not relying solely on your brain to “remember” everything, but using an external tool to create a fully traceable “digital copy” of your entire input process.

Your brain is a super-powered central processor. Its greatest strengths lie in thinking, analyzing, connecting, and creating. External tools, meanwhile, function like external SSDs—they excel at faithfully storing data.

At CES, the brain’s primary task should be focused observation, immersive conversation, and sharp thinking. If you force it to simultaneously handle data storage, its performance plummets. You may find yourself missing the next three crucial points just to capture the current one.

So upon entering the CES venue, I detached the DingTalk A1 Voice Recording Card from my phone, powered it on, and placed it in my shirt pocket.

Then I ignored it. It did its job. Over the next few hours, it acted like a sponge, silently capturing all surrounding audio using its six microphones and 45-hour battery life—my commentary, entrepreneurs’ questions, conversations with exhibitors.

And I focused on mine. Knowing everything was being recorded freed me to stay fully present.

In the end, it produced a comprehensive audio log titled “CES: Technology Exhibition Visit Record and Commentary.” This AI-transcribed document—tens of thousands of words long, complete with timestamps and speaker labels—became my “digital twin” of the CES visit, serving as the factual foundation for further analysis and content creation. Later, it even became one of the raw materials for [that article](https://mp.weixin.qq.com/s?__biz=MjM5NjM5MjQ4MQ==&mid=2651780487&idx=1&sn=e082c6571c8aaf3dead98d6fe3367c89&scene=21#wechat_redirect) I published the other day.
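What makes such a transcript a usable "digital twin" is its structure: every utterance carries a timestamp and a speaker label. A toy sketch of one such record might look like this (the field names are illustrative, not the actual DingTalk export format):

```python
# Toy sketch of a timestamped, speaker-labeled transcript segment,
# the kind of structure a diarized recording produces. Field names
# are hypothetical, not DingTalk's real schema.
from dataclasses import dataclass

@dataclass
class Segment:
    start_s: float  # offset from start of recording, in seconds
    speaker: str    # speaker label assigned by diarization
    text: str       # transcribed speech

log = [
    Segment(0.0,  "Liu Run",   "This booth specializes in violence detection..."),
    Segment(12.4, "Exhibitor", "The camera only flags violent behavior, not identity."),
]

# Because every segment carries time and speaker, you can answer
# "who said what, and when" without re-listening to hours of audio.
exhibitor_lines = [s.text for s in log if s.speaker == "Exhibitor"]
```

This is the property that later makes retrieval and attribution possible: the raw audio alone could not tell you who said a key sentence, but the structured copy can.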

For instance, this paragraph from the article came directly from that recording:

On site, I saw a company specializing in “violence detection.” Their cameras don’t recognize who you are or whether you’re smiling—they only detect violent behavior. Whenever someone pulls out a knife or raises a fist, the system identifies it as violence and can even trigger emergency alerts if needed. This might not be useful on Chinese streets, but in areas with poor public safety, it could be a life-saving necessity.

Yes. Today, many people are racing to adopt AI.

But without high-quality process data, even the smartest minds and AI systems can only engage in repetitive, low-level thinking.

Perhaps the fundamental skill of our era is this: record first, think later.

Creating a digital twin solves the problem of information storage. But at international events like CES, another common issue arises.

Language barriers.

So, what do I do?

03 High-Fidelity Communication

At CES, you get rare opportunities to talk with entrepreneurs and technical experts from around the world—valuable learning moments.

Yet such exchanges often lack “fidelity.”

In casual chats, getting the gist is enough. But in professional discussions, we aim for 100% precision—especially when talking about technical details or business models. Because a 1% misunderstanding in a 99%-accurate interpretation can still lead to major misjudgments.

This happened to us at CES.

At one booth, we met an expert from Google Waymo and discussed autonomous driving technology routes. While I like to think I have no trouble communicating in English, during such dense, highly technical conversations, I needed to ensure that terms like “LiDAR point cloud density,” “end-to-end algorithms in pure vision solutions,” and “edge cases” were received with zero loss of meaning.

So I pulled out my phone, opened DingTalk, and activated the “face-to-face translation” feature.

I placed the phone flat between us. The screen split in two: the half facing me displayed Chinese; the half facing him showed English. The text also auto-rotated 180 degrees so he could easily read it. No more awkward passing back and forth.

I spoke in Chinese—immediately, he saw precise English translations. When he replied in English, I saw fluent Chinese translations.

Then, we could simply focus on the conversation. Meanwhile, the entire bilingual dialogue was fully recorded by the DingTalk A1 Voice Recording Card in my pocket, generating a bilingual meeting summary. This transcript later became part of the “Wen Dao Global” knowledge base, available for future content output.

So you see,

Technology shouldn’t make you feel its power—it should make you forget it’s even there.

Alright. Now, a full day’s worth of information has been stored “high-fidelity” in the data warehouse. But this warehouse is too large and chaotic.

For example, back at the hotel at night, I couldn’t recall who said a key comment earlier that day—or in what context.

What do I do?

04 Conversational Retrieval

Long ago, we used writing to record thoughts and experiences, fighting against forgetfulness. This “note-taking” approach to information processing laid the foundation of human civilization.

Later, we moved notes into computers, gaining powerful search capabilities. In theory, as long as you remember a keyword, this “digitalized” method lets you retrieve any recorded information.

But now, a new form of “conversational” information processing is emerging.

What is “conversational” information processing?

Keyword search is like using index cards to find books in a vast library—you need to know the title, author, or classification number. If you only vaguely recall “a blue-covered book about robotic arms yesterday afternoon,” sorry, index cards won’t help.

Conversational retrieval, however, is like asking the librarian directly: “I’m looking for a book about the flexibility of robotic arms—cover might be blue.” Then, using their understanding of the library, the librarian quickly finds it for you.

Therefore, to enable flexible access, you first need a knowledge base that understands human language.
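The difference between the two approaches can be shown with a toy sketch. Real systems use embeddings and a language model; this bag-of-words version (entirely my own illustration, not how DingTalk works internally) just makes the contrast concrete: instead of requiring an exact keyword, it scores every transcript segment by how many words it shares with the question and returns the best match.

```python
# Toy "conversational retrieval": rank transcript segments by word
# overlap with a natural-language question. A deliberately simple
# stand-in for the embedding-based search real assistants use.

def ask(question: str, segments: list[str]) -> str:
    q_words = set(question.lower().split())

    def overlap(seg: str) -> int:
        # Number of question words appearing in this segment.
        return len(q_words & set(seg.lower().split()))

    return max(segments, key=overlap)

transcript = [
    "Speaker 2: dexterous hands are a critical area of robotics research",
    "Speaker 1: this booth shows a new lidar point cloud pipeline",
    "Speaker 3: battery life remains the bottleneck for ai glasses",
]

print(ask("who mentioned robotic dexterous hands today", transcript))
```

Even this crude version finds the right segment from a vague question; an exact keyword search for the full phrase "robotic dexterous hands" would have returned nothing, since no segment contains that exact string.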

For example, late at night in the hotel, I vaguely remembered a discussion about “robotic dexterous hands” during an afternoon visit to a robotics section. I wanted to include it in an article.

So I opened DingTalk and asked the AI via chat: “Who mentioned something about robotic dexterous hands today?”

Within seconds, the AI responded. It quickly located the relevant segment in “CES: Technology Exhibition Visit Record and Commentary” and summarized the key points: “Robotic dexterous hands are a critical area of research, with core challenges lying in achieving sufficient flexibility and pressure sensing…” I could even follow up: “What was the context of that discussion?” And it would continue answering based on the surrounding content.

This is “conversational” information processing.

It demands that tools adapt to human habits—not the other way around.

That’s also how I handle fleeting inspirations.

One night, just before sleep, a sudden thought struck me: Many small AI startups I met today seem to fit perfectly into the “perception-decision-action” framework.

But I really didn’t want to turn on the light and grab my phone. I’m already sleep-deprived. Going through that routine might keep me awake.

So I reached back and pressed the button on the DingTalk A1 Voice Recording Card on my phone. A soft “buzz”—I knew it had started listening. Eyes closed, I whispered: “AI startups, perception, decision, action.” Then went back to sleep.

The next day, that idea was already sitting in my “Inspiration” folder as a voice note.

Alright. Reading, recording, organizing, retrieving… All preparation leads to one final step: output.

So how do these pieces of knowledge become a logically clear article?

05 Scaffolded Writing

I believe almost every writer has faced the blank page dilemma—staring at an empty document, unsure how to begin. Sometimes due to lack of ideas. Other times because of overwhelming thoughts—so much to say, yet no idea where to start.

AI excels at solving the latter.

Why?

Because many people’s writing process is essentially “scaffolded writing.”

Like building a house. Your mind holds countless inspirations, case studies, and data points—these are bricks scattered across a construction site. What you ultimately deliver is a structurally sound, beautifully furnished house. The most labor-intensive part between bricks and house? Building the scaffolding—the article outline and logical framework.

In the past, we built this scaffolding ourselves, welding it together beam by beam. All the bricks spun endlessly in our heads. Which goes first? Which follows? Everything mattered.

Now, this manual work can be partially delegated to AI.

Of course, AI won’t and can’t replace me as the chief architect. The soul of the house—unique perspectives, deep insights, emotional resonance, clever storytelling—must come from me. But AI can still provide significant support.

For example, before writing, I can feed all my “digital twins” to AI. Then, through DingTalk, I issue a command—not just “summarize,” but: “Act as a senior business analyst. Analyze these CES audio recordings comprehensively. Extract 20 most important insights, and generate a detailed article outline structured by technological trends, business applications, and future implications. Under each insight, list 2–3 supporting examples or data points.”

Soon, AI delivers a scaffold—an initial outline. It may be rough. Some points may need refinement. But it removes the agony of going from zero to one.

From there, I iterate and refine until satisfied. Then, I fill in arguments, polish phrases, infuse emotion…

Yes.

AI cannot replace my thinking.

But it drastically shortens the gap between “thinking” and “expressing.”

And I can dedicate more energy to “creation,” rather than “organization.”

Final Thoughts

From node-based input, to digital twin method, high-fidelity communication, conversational retrieval, and scaffolded writing—

This is my workflow on the “Wen Dao Global” journey.

You could also see it as an efficient learning system.

The key is to free your brain from the heavy lifting of “memorization” and “organization,” so it can focus on meaningful “thinking” and “creation.”

The DingTalk A1 Voice Recording Card is the physical embodiment of this workflow during the trip. So today, I’ve shared it with you as a case study, hoping it inspires you.

Looking at the DingTalk A1 Voice Recording Card in my hand and recalling the scenes at CES, I finally understand why so many say 2026 might be the breakout year for AI hardware.

Because the capability of large AI models is like electricity in the grid—ubiquitous and full of potential. But you need various “appliances” to actually use it.

AI hardware is precisely such an appliance—it brings AI capabilities into our daily lives.

That’s why, at CES, real-time translation AI glasses, health-monitoring smart rings, and AI companions that learn pet behaviors are no longer distant concepts. Core technologies like sensors, chips, and batteries are maturing rapidly. This year, CES’s theme shifted from AI as software to AI as physical entities.

Products like the DingTalk A1 Voice Recording Card represent just one wave in this trend. It’s not just a product—it’s a sign of a larger shift: AI is growing limbs and stepping into real life.

In the future, such AI hardware will help professionals across fields break free from repetitive tasks, empowering them to pursue more creative work.

But the ceiling of any tool is ultimately defined by the user’s imagination.

My own imagination is very limited. That’s why we’ve partnered with DingTalk to offer five brand-new DingTalk A1 Voice Recording Cards. I’m genuinely curious: in what scenarios would you use it? What problems would you solve? Please share in the comments below. Also, feel free to interact with us through likes, shares, and reads.

By 6:00 PM on January 19th, we’ll select the top five commenters with the highest number of likes on their “scenario + problem-solving” entries, and send them these five DingTalk A1 Voice Recording Cards.

Alright, my friend.

A new year has begun.

May you continue learning efficiently and evolving powerfully in the year ahead.

Keep going.

Insights: Liu Run / Lead Writer: Er Man / Editor: Ge Ping / Layout: Huang Jing

We are dedicated to serving clients with professional DingTalk solutions. If you'd like to learn more about DingTalk platform applications, feel free to contact our online customer service or email us. With a skilled development and operations team and extensive market experience, we’re ready to deliver expert DingTalk services and solutions tailored to your needs!

Using DingTalk: Before & After

Before

  • × Team Chaos: Team members are all busy with their own tasks, standards are inconsistent, and the more communication there is, the more chaotic things become, leading to decreased motivation.
  • × Info Silos: Important information is scattered across WhatsApp/group chats, emails, Excel spreadsheets, and numerous apps, often resulting in lost, missed, or misdirected messages.
  • × Manual Workflow: Tasks are still handled manually: approvals, scheduling, repair requests, store visits, and reports are all slow, hindering frontline responsiveness.
  • × Admin Burden: Clocking in, leave requests, overtime, and payroll are handled in different systems or calculated using spreadsheets, leading to time-consuming statistics and errors.

After

  • Unified Platform: By using a unified platform to bring people and tasks together, communication flows smoothly, collaboration improves, and turnover rates are more easily reduced.
  • Official Channel: Information has an "official channel": whoever is entitled to see it can see it, it can be tracked and reviewed, and there's no fear of messages being skipped.
  • Digital Agility: Processes run online: approvals are faster, tasks are clearer, and store/on-site feedback is more timely, directly improving overall efficiency.
  • Automated HR: Clocking in, leave requests, and overtime are automatically summarized, and attendance reports can be exported with one click for easy payroll calculation.

Operate smarter, spend less

Streamline ops, reduce costs, and keep HQ and frontline in sync—all in one platform.

  • 9.5x operational efficiency
  • 72% cost savings
  • 35% faster team syncs

Want a free trial? Please book a demo meeting with our AI specialist via the link below:
https://www.dingtalk-global.com/contact
