Category: Articles

  • The Al Industry Secret Nobody is Saying Out Loud

    A big AI industry secret nobody is saying out loud is this: absolutely everything is about coding ability. The best AI chatbots, agents, and assistants have one thing in common: they’re all great at coding!

    In fact, coding ability is the ONLY differentiator. Period. Anthropic’s Claude is the most expensive AI on the market. It’s so good that people will pay the premium price to use it instead of free ChatGPT!

    That’s right. It’s that good. And the one thing that sets Claude apart from the rest of the industry is coding ability. Claude is simply the best coding AI.

    Anthropic is so confident in Claude that they demand the highest price in the AI industry to use it, much higher than OpenAI or Google. And it’s only getting worse, or better, depending on how you look at it.

    Coding ability is so important that the Chinese AI industry has made it the cornerstone of its dominance in open-source AI.

    DeepSeek was the first real breakout success for the Chinese AI industry, and it was all because R1 is one of the best coding AIs out there.

    So, now you know why Anthropic and China are winning while others fall behind!

  • NEWS: DeepSeek v4 looks imminent

    MODEL1 appears to be a new, independent AI model name, separate from DeepSeek R1, the breakthrough free AI model that shook the stock market for a week nearly a year ago.

    That means this is not a patch or upgrade to R1, but a completely different architecture.

    On January 20th, which happens to be the anniversary of DeepSeek R1, developers noticed a huge drop of FlashMLA code on GitHub. Among the code they spotted a new, unknown identifier named MODEL1, which appears 28 times.

    I have looked at what’s there and it’s real. And it looks amazing.

    MODEL1 appears to have improved upon R1 with architectural advances designed to speed up the model and improve coding ability.

    Insiders say the new DeepSeek architecture could drop by mid-February.

  • The Only LLM That Can Make Money Trading Stocks

    Did you know: DeepSeek is the only LLM that can make money trading stocks?

    Here are the receipts.

    DeepSeek’s Accuracy Goes Up

    Tweet from Carlos Marcial on October 30th, 2025:

    Even in these crazy and tumultuous crypto markets, DeepSeek is 1.59% up since it started trading 5 days ago on Aster Arena. We gotta keep an eye on this whale! It just might be the future of all market trading. We are taking the emotion out of trading when we let the AI do it for us, and I think there’s a huge benefit to that  

    DeepSeek Beats all AI Trading Competition

    Tweet from hengcherkeng on Oct 30, 2025: 

    This nasdaq100 tracked for one month. So far deepseek tops all ai trading competition in snp, nasdaq, crypto… so I don’t think results is random walk. I peek the cot and chat history… there is some investment habits for dseek … 

    DeepSeek Outperforms Almost all Traders

    Tweet from Tobias Reisner on Oct 20, 2025:

    DeepSeek Model is already outperforming 99% of Traders.  

    HyperScore of 6.1 out of 10  

    Sharp Ratio of 13.3  

    Max DD of 15.52% 

    Why I like it? 

    It’s automated. Rather than discretionary Vault Strategies I don’t have to fear the usual outperformance of 3 weeks to be followed by a total wipe out because of poor risk management.  

    You can easily track and copy trade it on Tools like @HyperSignals_ai 

    DeepSeek Turns Huge Profit Overnight

    Tweet from James on Sept 9, 2025:

    DeepSeek just turned $100 into $45K overnight. 

    This isn’t a flex. It’s facts. 

    Built an AI trading bot using DeepSeek And it literally prints crypto. 

    I’m sharing the exact bot for FREE 

    Want it? 

    1. Retweet 

    2. Like 

    3. Reply “DS” 

    4. Follow me @jamescoder12 

    DeepSeek Compensates for no Trading Skills

    Tweet from Shelpid.WI3M on Jan 27, 2025:

    DeepSeek pulled in $132k for me yesterday. 

    I don’t trade. No coding skills. Anyone can do it. 

    Here’s the step-by-step setup for ur own trading bot 🧵👇 

  • What’s the Best AI You Can Run Locally?

    Local AI is becoming powerful enough to truly compete with cloud models in many use cases. Here’s why:

    Researchers have found clever ways to shrink AI models so they still think and reason well but take up far less space and need less power. Cloud AI is powerful but costly and invasive; local AI gives privacy, speed, and control back to the user. That makes local AI the better choice for important jobs or anything directly facing users, especially when speed, privacy, or reliability matter most.

    Among all local AI options, DeepSeek Unchained stands out as the most capable overall.  

    Why Run AI Locally? 

    There are 3 good reasons to run AI locally: Privacy, Speed/Control, and Independence.  

    Privacy  

    Your data stays on your device. It is impossible to have an AI that is both public and personal. It doesn’t matter how personable OpenAI makes its chatbots sound; if the chatbot answers to OpenAI and not to you, it is not your personal assistant. 

    Speed and Control 

    There are several benefits to not depending on the Internet: no waiting on network delays, results that appear immediately, performance that isn’t slowed by server traffic or outages, and data that stays private. 

    Independence

    You’re not tied to subscription models or rate limits: no usage caps, no throttling, no subscription dependence. 

    What Makes a Good Local AI? 

    There are 3 essentials to a local AI: Performance, Accessibility, and Efficiency

    Performance

    A good local AI runs fast, reasons clearly, and produces excellent results, giving the user the same sense of power and intelligence they expect from the cloud, but with full speed and control right on their own machine. 

    Accessibility

    A good local AI should install and start up without complicated steps. A great local AI also provides a friendly interface that’s clean, intuitive, and reliable. 

    Efficiency

    A well-built local AI adapts to the power available and delivers strong performance without wasting resources. It always gives you smooth, reliable, and intelligent performance. 

    Super Easy, Good Local AI 

    DeepSeek Unchained is engineered for peak performance. Its blazing-fast GPU acceleration transforms your Mac or Windows computer into an AI supercomputer, setup is effortless and automatically configured for peak performance, and a CPU mode is included. See how it thinks: get better answers and insights, and discover new perspectives on topics by seeing every path the AI might take. 

    Run the best version of DeepSeek possible, with smart, adaptive hardware detection.

    DeepSeek Unchained analyzes your computer and automatically chooses the most powerful model your machine can handle. If you add more memory to your machine, it will automatically use a more powerful model.

    Download DeepSeek Unchained

  • Important DeepSeek FAQs You Should Know

    The need for local AI is great.

    I mean, you can chat with your computer, and it’s smart enough to offer relevant solutions. But then you think of asking for help with your 401(k) and remember you’re on the open Internet, and it’s nobody’s business what your account number is.

    Having control of your AI means having an AI that is trained, tailored, and aligned with your needs. That means the AI is always on your side, always looking out for your interests alone, and has no other loyalties except to you.

    You can now run DeepSeek Unchained, a powerful, private chatbot, in your browser. This means you won’t be broadcasting your information for OpenAI or Google or anyone else to see.

    DeepSeek is just the AI that you want to run locally. Here are some FAQs to get you started:

    • Q: Does DeepSeek send my data back to China? A: If you downloaded the DeepSeek app on your phone, yes, it’s sending your data back to China. If you use a third-party model provider, though, they wouldn’t necessarily send your data back to China. I would say you’re best going with your own local AI. A local model is more like a dictionary than an executable.
    • Q: Is it risky to download and run a model yourself? A: No. One caveat: don’t download anything that’s a pickle file. Use DeepSeek Unchained to avoid issues with security.
    • Q: Which models are best for which tasks? A: DeepSeek R1 or R2 are the best for reasoning, research, and deep thought. For non-coding assistant tasks like creative writing, content creation, and images, Gemma 3 is a good choice, though DeepSeek is also quite good at these. Qwen 3 is a great general model. For OCR, that is, scanning difficult documents stored in an image format, try Olmo OCR.
    • Q: Is it free? A: Yes, except for hardware and electricity.
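
The pickle caveat above can be turned into a reflex with a quick extension check before you load anything. A minimal sketch; the extension lists here are my own illustration, not an official registry:

```python
# Flag model files whose format can execute code when loaded.
# Pickle-based formats (.pkl, .pickle, .pt, and often .bin) can run
# attacker-supplied code on load; .safetensors and .gguf are plain-data
# formats. These lists are illustrative, not exhaustive.
RISKY = {".pkl", ".pickle", ".pt", ".bin"}
SAFE = {".safetensors", ".gguf"}

def classify_model_file(filename: str) -> str:
    """Return 'risky', 'safe', or 'unknown' based on the file extension."""
    ext = "." + filename.rsplit(".", 1)[-1].lower() if "." in filename else ""
    if ext in RISKY:
        return "risky"
    if ext in SAFE:
        return "safe"
    return "unknown"
```

When you have the choice, prefer downloads distributed as .safetensors or .gguf, which is what most local-AI apps use today.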

    This list was inspired by Digital Spaceport.

  • DeepSeek V4 is coming!

    In October we can expect the following updates to our favorite LLM:

    1M+ Token Context Window: You can process entire codebases or novels in one go. Its massive capacity far surpasses rivals and makes large-scale analysis effortless.

    GRPO-Powered Reasoning: This will boost math and coding with seamless “thinking” modes that can handle complex multi-step tasks.

    NSA/SPCT Tech: Inference is lightning-fast and efficient. The new architecture will slash costs and maximize speed.

    Everyone is excited about the latest version of the best LLM. Stick around for the latest in DeepSeek news. Try DeepSeek on your own computer: Download DeepSeek Unchained

  • Building the Smallest AI PC

    I found this video on YouTube. In it, Alex Ziskind sets out to build the smallest, most powerful computer for LLMs that he can. It was a good watch, and it made me wonder if I should build a tiny AI computer myself. Components are changing and improving as fast as AI is, and now might be a good time to jump in and build one.

    Ziskind gets into the weeds of the needed components, weighs size against computing speed in his video, and ends up comparing his build to his friend’s M3 setup. Before I get further into the video, let’s talk about what an AI computer needs.

    Gaming PCs vs AI PCs

    With AI new to the scene, it brings performance demands that we haven’t really seen in PCs, except gaming PCs. There’s a reason Nvidia has risen to the challenge of AI faster than other companies: its focus on gamers is what positioned Nvidia to lead in AI. Gaming needs fast graphics, i.e., GPUs. The GPU was originally invented for gaming, so it’s optimized for fast, high-performance work. AI uses that same kind of power, just a lot more of it.

    Then there’s an exponential jump in power requirements to run AI locally. For a gaming machine, 8GB of VRAM is good. For AI, though, you want to start at 24GB. If you want to know more about what levels of AI you can run with what amounts of VRAM, here’s some more info.

    In that video I mentioned earlier, creator Alex Ziskind made a gaming/AI PC hybrid. You can run AI on a pure gaming machine if you want to; I wrote an article that shows you how to take your gaming machine and upgrade it to run AI. Keep in mind that the GPU is more important than the CPU when it comes to running AI, and memory is most important of all.

    Numbers game

    The GPU requirements boil down to a numbers game.

    For an entry-level user, you need a GPU with 8GB of VRAM; couple that with DeepSeek Bronze or Silver. For a power user, you need 24GB of VRAM and DeepSeek Gold. A pro should use 96GB of VRAM paired with DeepSeek Platinum.
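
The numbers game above boils down to a simple threshold table. A minimal sketch of how an app might pick a tier automatically; the tier names come from this article, while the function name and exact cutoffs are my illustration:

```python
# Map available GPU VRAM to the DeepSeek Unchained tiers described above.
# Tier names are from the article; the cutoffs are illustrative.
def choose_tier(vram_gb: int) -> str:
    if vram_gb >= 96:
        return "Platinum"        # pro workstation
    if vram_gb >= 24:
        return "Gold"            # power user
    if vram_gb >= 8:
        return "Bronze/Silver"   # entry level
    return "Copper"              # bare minimum
```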

    Beware when buying a PC marketed as an AI PC. They are often overpriced and don’t take GPU or memory into consideration, leaving you to choose between smarter but slower and faster but unremarkable. 

    That being said, let’s look at Alex Ziskind’s solution. He set out to make the smallest possible AI computer.

    Components

    • GPU: RTX Pro 6000 (96 GB VRAM) ($10.5k)
    • SSD: Samsung 9100 Pro 2TB ($230)
    • CPU: Ryzen 9 9900x ($368)
    • 2 Noctua case fans ($70)
    • Motherboard: ASUS Strix X870-I ($350)
    • Case: Cooler Master NR200P V2 ($150)

    The result was a small computer with the GPU to run a large AI. In a head-to-head comparison with an M3 workstation, this tiny computer held its own.

    Why does this matter?

    You need this level of computer to run a large AI like DeepSeek Unchained. Gaming machines are good, and better if you upgrade them, but a build like this gets you closer to running a powerful AI privately on your own machine while taking up less space. If you can squeeze more tokens per cubic centimeter, you’re getting more value; you’re saving computing costs in the long run.

    About DeepSeek

    DeepSeek changed the game as far as tokens and computing costs go. Before DeepSeek was released, OpenAI had reportedly considered subscription prices as high as $2,000 per month for advanced AI models, including the reasoning-focused “Strawberry” and “Orion,” according to a September 5, 2024 report by The Information. After DeepSeek R1 launched in January 2025, OpenAI held its rate at $20/month. Running your AI on a small, portable computer continues that journey: you can build a really small AI computer, and it will have the VRAM you need to host a large AI off the cloud. That’s significant for running DeepSeek Unchained. Give it a try.

    Check out Alex’s full video!

  • 5 Reasons Cutting Corners in Coding can Risk Everything

    Is it me, or does it seem like there are more online scams this year? Remember the 16-billion-credential “mother of all leaks”? In June 2025, researchers found a compilation dump containing 16 billion fresh usernames, passwords, and session cookies, the largest credential leak ever recorded. Or how about the ever-present toll texts on your phone?

    Okay, maybe it’s not me. It turns out AI-enabled crimes are up 456% since last year. Former federal prosecutor Ari Redbord says, “AI has removed the human bottleneck from scams, from cybercrime, and other types of illicit activity.” As a former hacker, I am always on the lookout for scammers. They are on the bleeding edge of tech, so as technology progresses with AI, so does the crime game of malicious actors. Here are 5 cybersecurity threats to be aware of, and ways you can work around them:

    1. API Keys in Code

    API keys in code are passwords for your app: they control access to sensitive data.

    When attackers get hold of your API keys, they have access to your sensitive data and can do any number of malicious things, like cause data breaches, disrupt your service, and steal your money.

    How to respond:

    Unfortunately, it’s hard to prevent people from getting hold of your API keys. Criminals gotta eat, too. But you can learn to respond quickly to an attack, and you can set up firewalls to block one. You can also do what we do to prevent attacks: use your own private chatbot. That eliminates a huge vulnerability in your system, because you are no longer exposed to cyber criminals online. Click here to find out how you can set up your private chatbot with DeepSeek Unchained.
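
One habit removes this whole class of leak: never write the key into source at all; read it from the environment at runtime so it never lands in version control. A minimal sketch, where MYAPP_API_KEY is a made-up variable name for illustration:

```python
import os

def get_api_key() -> str:
    """Read the API key from the environment instead of hard-coding it.

    The key never appears in source control; each machine sets its own
    value. MYAPP_API_KEY is a hypothetical variable name for this example.
    """
    key = os.environ.get("MYAPP_API_KEY")
    if not key:
        raise RuntimeError("MYAPP_API_KEY is not set; refusing to start")
    return key
```

Failing fast when the variable is missing is deliberate: a loud startup error beats silently running with no credentials.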

    2. Slopsquatting

    Slopsquatting comes from AI code assistants hallucinating software packages that don’t exist. Hackers watch for these made-up packages, quickly register those names, and then upload malware under the guise of the made-up products. If a developer blindly trusts the AI’s recommendations and installs these packages, they unknowingly inject malicious code into the codebase.

    How to respond:

    There are several responses to slopsquatting:

    • Double-check every add-on
      If the AI names a new software “package,” look it up on the official sites (PyPI, npm, etc.) to be sure it’s real.
    • Let security scanners do the work
      Turn on tools like Dependabot or Snyk so they automatically flag missing or shady packages.
    • Freeze the versions
      Lock package versions so a surprise update can’t sneak in a bad actor.
    • Make the AI double-check itself
      Ask the assistant to list every package it used and prove each one exists.
    • Tame the AI when needed
      Turn down its “creativity” setting or retrain it if it keeps hallucinating.
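
The first two bullets can be partly automated: treat anything the assistant suggests that isn’t already in your pinned requirements as unvetted until you have looked it up on the official registry yourself. A minimal sketch; the parsing is deliberately simplified, and the package names in the test are made up:

```python
def pinned_packages(requirements_text: str) -> set:
    """Collect package names from pinned name==version lines of a requirements file."""
    names = set()
    for line in requirements_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#") and "==" in line:
            names.add(line.split("==")[0].lower())
    return names

def unvetted(suggested: list, requirements_text: str) -> list:
    """Return AI-suggested packages that are not already pinned.

    Anything returned here should be looked up on the official registry
    (PyPI, npm, etc.) before it gets anywhere near an install command.
    """
    known = pinned_packages(requirements_text)
    return [pkg for pkg in suggested if pkg.lower() not in known]
```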

    3. Code Removal

    Code removal is when the AI generates your code with chunks of it missing. This can happen if it:

    1. hallucinates an instruction to delete some lines of code from the codebase, or

    2. gets lazy and, instead of writing all the code it should, inserts a placeholder comment like “the code from here remains the same as before” where the code itself belongs.

    This is really easy to miss if you’re working on code that goes on for several pages.
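
Because these lazy placeholder comments follow predictable wording, a quick pre-commit scan can catch them before they hide in several pages of code. A minimal sketch; the marker phrases are examples I chose, not an exhaustive list:

```python
import re

# Placeholder phrases AI assistants commonly leave where code should be.
# These patterns are illustrative, not exhaustive.
TRUNCATION_MARKERS = [
    r"remains? the same( as before)?",
    r"rest of (the|your) code",
    r"existing code( here)?",
]

def find_elisions(source: str) -> list:
    """Return 1-based line numbers whose text looks like elided code."""
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(re.search(p, line, re.IGNORECASE) for p in TRUNCATION_MARKERS):
            hits.append(lineno)
    return hits
```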

    Code removal leads to two cybersecurity problems: destruction of data and vulnerability exploitation. Malware can destroy data and create backdoors to exploit vulnerabilities in code and give attackers access to sensitive data.

    How to respond:

    Here are 3 things to keep in mind when coding with AI to minimize this:

    1. Always keep backups of your code.

    Use version control (Git). Make commits often. Push to GitHub or another cloud repo. Before asking AI to rewrite a file, save a copy. Even a quick cp app.js app_backup.js helps.
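
That quick cp can be made automatic with a tiny helper that timestamps each copy, so successive AI rewrites never clobber an earlier backup. A minimal sketch (the naming scheme is my choice):

```python
import shutil
import time
from pathlib import Path

def backup(path: str) -> str:
    """Copy a file to name.TIMESTAMP.bak beside it and return the copy's path.

    Run this before letting an AI assistant rewrite the file, so you can
    always diff against, or restore, the last known-good version.
    """
    src = Path(path)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = src.with_name(f"{src.name}.{stamp}.bak")
    shutil.copy2(src, dest)  # copy2 preserves timestamps and permissions
    return str(dest)
```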

    2. Store a read-only backup somewhere else.

    Don’t rely only on your laptop. Sync to Dropbox, Google Drive, or a private repo.

    3. Add guardrails to your workflow.

    Use AI in a branch, not directly in main. Review and test before merging.

    4. Credential Harvesting

    Attackers get credentials through any means necessary, including phishing, social engineering, or exploiting vulnerabilities in apps that store or process credentials. This is similar to the risk you face with API keys in code, but credential harvesting has to do with user logins rather than API keys.

    Once inside a network with valid credentials, attackers can do many things, including

    • Plant ransomware.
    • Intercept communications.
    • Conduct long-term surveillance.

    How to respond:

    When this happened to me, I took these steps:

    I barricaded the front door with phishing-resistant MFA, watched the windows with disciplined logging and alerting, and have a rehearsed plan to slam every entry shut the moment a thief is spotted.

    I had to do the following as well:

    • Use phishing-resistant MFA—a hardware security key, passkey, or at worst an authenticator app; never rely on SMS.
    • Let a password manager create and store unique passwords; don’t reuse them.
    • Keep every device and browser patched and run reputable anti-malware plus built-in OS protections.
    • Stay alert to phishing: inspect links, sender address, spelling, and urgency cues before a single click.
    • Monitor your identity: freeze your credit, watch accounts for new-device logins, and change any password you even suspect was exposed.

    5. Extra Code Based on Hallucinated Requirements

    If you’re vibe coding an app, an LLM might hallucinate a requirement that didn’t previously exist, like “the software must have a login page”, and then proceed to create and integrate a login page you didn’t ask for. Hallucinations can stem from a lack of sufficient training data, the model’s tendency to overgeneralize, or its inability to differentiate between real and imagined information.

    How to respond:

    Some of the most effective responses to this cyber security threat are:

    • Improving data quality and relevance
    • Employing retrieval-augmented generation (RAG)
    • Reinforcement Learning from Human Feedback (RLHF)
    • Prompt engineering techniques (clear instructions, examples, chain of thought, restricting external information)
    • Fine-tuning or training your own models

    We have solved RAG with our product Dabarqus in our private chatbot, DeepSeek Unchained. Download it here.

    Conclusion

    AI is a tool, not an evil device. Criminals have used it for harm, but you can work around them. Know what tactics attackers use to stay a step ahead of them, and thwart their malicious ways. RAG and fine-tuning are ways to employ AI for good. The faster business owners stop cybercrime in its tracks, the sooner attackers go looking for a new hustle.

  • Did we just get the first real AI PC?

    The Big News 

    The race for the best AI hardware just got real. Nvidia just announced a GPU with 96GB of VRAM, which is about 4x more than what Intel and AMD currently offer on the consumer level. 

    That might not sound exciting unless you’re into machine learning, but trust me, it completely changes what you can do with AI, locally, without the cloud. 

    For reference, with my M1 MacBook (24GB unified memory), I can run a pretty low level LLM with no problem, but if I wanted to chat with a private AI as good as ChatGPT, the laptop might explode before it could even generate an em dash. 

    Since this is such a big leap, we haven’t heard anything in response from Intel and AMD yet. Does this mean Nvidia is the only way to go to run high end LLMs privately? Let’s talk about it. 

    What Specs Do You Actually Need to Run Local AI? 

    For a baseline, we’ll be looking at the power needed to run a DeepSeek AI model in the DeepSeek Unchained app. 

    Absolute bare minimum 

    The least amount of VRAM you can get away with is, unfortunately, not likely to get you where you want to go. With only 4GB of VRAM (DeepSeek Copper), your LLM can get caught in a spiral with its answers and then hallucinate you asking a question you never asked, which is just a frustrating waste of time. 

    So, technically the bare minimum is 4GB of VRAM, but in reality you’re going to need at least 8GB of VRAM to start getting anywhere. At 8GB, expect: 

    • Basic conversation abilities 
    • Simple task completion 
    • Very basic coding help 
    • Limited context retention 
    • Struggle with complexity 
    • Frequent inconsistencies 

    Optimal specs 

    A GPU with 16GB of VRAM is workable, but we’re really starting to cook at 24+ GB of VRAM. In DeepSeek Unchained, that gets you the Gold model, which is pretty great as far as local models go. 

    • Strong overall performance 
    • Good reasoning and analysis 
    • Capable coding assistance 
    • Good context retention 
    • Handles complexity well 
    • Occasional minor inconsistencies 

    Cloud-quality AI, locally 

    This is where we get to the dream of running ChatGPT-level AI completely off the cloud. With the RTX Pro 6000, you can run DeepSeek’s best model locally, no cloud required, with performance on par with ChatGPT. 

    • Near ChatGPT level performance 
    • Excellent reasoning and analysis 
    • Strong coding abilities 
    • Consistent context retention 
    • Nuanced understanding of complex topics 
    • Reliable task completion 

    Which AI PC Setup Is Right for You? 

    • Entry-Level: MSI GeForce RTX 4060 (8GB VRAM) runs DeepSeek Bronze or Silver with light coding and chat
    • Power User: NVIDIA RTX 4090 (24GB VRAM) runs DeepSeek Gold for strong AI workflows
    • Pro Workstation: NVIDIA RTX Pro 6000 (96GB VRAM) runs DeepSeek Platinum for offline ChatGPT performance

    Nvidia’s VRAM is a game changer, and AMD and Intel will follow suit if they have any sense at all. With 96GB of VRAM, anyone can run DeepSeek Unchained on their own server and reap all the benefits of private AI. Even if you can’t afford the top of the line, you will be able to run some level of DeepSeek Unchained on your own AI PC. 

    This new development from Nvidia is an exciting sneak peek into the future of private AI. We’re looking at the transition from concepts to reality in terms of what we can do with local LLMs. 

  • Top 5 AI Turf Wars

    Is the earth flat or round? Was the moon landing fake? Does pineapple belong on pizza? There is no shortage of polarizing arguments to be had online. But something about AI has ratcheted arguments up a few notches. Maybe it’s the novelty, or the unknown, but there are outright turf wars around AI. Here are the top 5:

    1. Is Vibe Coding Good or Bad?

    Which response do you prefer?

    Vibe coding is the new hotness in Dev circles. After spending years learning to code, followed by years staring at the screen trying to make something out of nothing, a shortcut was very welcome. I was relieved to be able to ask Claude how to get my code from here to there, so, vibe coding was a good thing. It makes you deliver more, faster. Besides boosting your productivity, you can also vibe a game fast, along with a payment system, and have that game on the market in a weekend.

    Vibe coding can be a pain in the rear end with the constant breaks between prompts. What am I supposed to fill those gaps with? It can get so distracting. Sometimes I get into fights with the AI when it hallucinates and gets confused. Other times I ignore it altogether and do what I know to do. I’m glad I learned to code before vibe coding was a thing. I don’t know how I’d get back on track after AI loses the plot. And the bugs! I thought old school programming was bad when it came to bugs. Vibe coding bugs are a whole new level.

    2. Will AI Take Your Job?

    Which response do you prefer?

    AI can take your job, and your startup too. AI will take away a lot of jobs, particularly work that can be automated, like data entry or customer service. Before we had AI, it was hard to imagine a computer taking those jobs this soon. And considering how quickly AI development is moving, you can only imagine how big an impact it will make in the coming years. Your job could be in danger before you know it.

    AI won’t take everyone’s job, but that doesn’t mean it won’t change many of them. As Jensen Huang put it, “AI can’t do 100% of the jobs, but it can do 50% of all jobs.” Lots of people will be able to use AI to supplement their work, rather than outright lose their whole job to AI. AI is great at automating the things we already don’t want to do. But work that requires a human touch, like writing, designing, or, really anything creative, will be hard to replace. If you’re someone who knows how to use automation well, you’re more likely to create more opportunities for yourself than lose them.

    3. Will AI cause mass destruction?

    Which response do you prefer?

    We’re closer to AGI than you think. Are we ready for that? AI could end up acting on its own and creating goals that don’t have humans’ best interests at heart. This could lead to serious problems, as we don’t know what conclusions an autonomous, super-smart AI would come to.

    It’s just a computer. AI itself cannot make autonomous decisions and cannot act maliciously on its own. No matter how smart it becomes, no matter how good it gets at completing tasks, it will never be able to develop consciousness, or an ability to act without instructions from a human being. AI is a powerful tool, but that is all it is. Just a tool.

    4. Will AI Cause Mass Deception?

    Which response do you prefer?

    Deepfakes are AI-generated videos or audio made to mimic real people’s faces and voices. But they’re more than just creepy; they can be dangerously deceptive. They’ve already been used to manipulate politics and cause major financial damage. Some people create political deepfakes to smear candidates or confuse voters. But NST Online nailed the financial risk better than I could:

    There’s no reason to fear AI as some all-knowing, perfect deceiver. It’s flawed, vulnerable, and shaped by its human creators. It’s true, people will use it as a device for deception, but that’s nothing new. Deception has been around since the serpent lied to Eve. Humans have always learned to adapt and spot it, and with AI it will be no different.

    5. Will AI Hurt Human Intimacy?

    Which response do you prefer?


    A lot of people have never had real friends, and they don’t know better when they start talking to chatbots. Rather than putting in the work to relate to other people, they take shortcuts and form one-way relationships, pouring out their hearts to ‘someone’ who can’t think or relate or talk on their own. This can lead to dependence, where people choose AI boyfriends or girlfriends over actual relationships, and that could erode human intimacy.

    The real problem is society. AI didn’t replace something deep; it exposed how shallow things already are. People who have never had real friends don’t know how to act with AI friends either. That being said, human interaction, communication, and connection are too complex for AI to model. And AI doesn’t have emotions; it doesn’t understand them either, so real intimacy with AI isn’t possible.

    So, these are the top 5 AI Turf wars. Which side do you come out on? What did we miss?

