• melsaskca@lemmy.ca
    link
    fedilink
    English
    arrow-up
    5
    arrow-down
    1
    ·
    1 hour ago

    I guess that means it could also be the most effective tool for saving democracy. Fuck these what-ifs over factual news.

  • SabinStargem@lemmy.today
    link
    fedilink
    English
    arrow-up
    2
    ·
    edit-2
    1 hour ago

    I think rejecting AI is a mistake. All that does is allow fascists to have mastery of the tools. Like money, guns, media, food, oil, or any number of other influential things, you don’t want a select few people to have sole control over them.

    Instead, we should adopt AI and make it work towards many good ends for the everyday person. For example, we could someday have AI that serves as an effective and cheap lawyer. This would allow small companies to oppose the likes of Disney in court, or black dudes to successfully argue their innocence against cops.

    AI, like any tool, reflects the intent of its user.

  • brachiosaurus@mander.xyz
    link
    fedilink
    English
    arrow-up
    2
    ·
    5 hours ago

    AI “guardrails” are an even bigger tool governments can use to dismantle democracy and kill freedom. The article starts by quoting the pope, who isn’t even a democratic ruler: he gets appointed by cardinals who were appointed by the previous pope. AI could have written a more useful article.

  • MochiGoesMeow@lemmy.zip
    link
    fedilink
    English
    arrow-up
    14
    ·
    9 hours ago

    Honestly it just feels like AI was created to spy on us more efficiently, and less so to assist us.

  • Randomgal@lemmy.ca
    link
    fedilink
    English
    arrow-up
    9
    arrow-down
    5
    ·
    7 hours ago

    It’s guns. The most effective way to dismantle democracy is violence.

    AI is not stopping anyone from revolting. Guns and the military are.

    What a stupid fucking take.

    • CalipherJones@lemmy.world
      link
      fedilink
      English
      arrow-up
      15
      ·
      edit-2
      7 hours ago

      AI allows for 24/7 bot networks to shape perspectives on politics and culture by leaving comments everywhere man. Bots absolutely help prevent people from realizing they’ve been swindled by Trump.

    • Zombie-Mantis@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      ·
      6 hours ago

      By controlling what people see and hear on TV and social media, they don’t need to use guns. At least, not nearly as many.

  • lacaio da inquisição@lemmy.eco.br
    link
    fedilink
    English
    arrow-up
    19
    arrow-down
    1
    ·
    24 hours ago

    Creating unbiased public, open-source alternatives to corporate-controlled models.

    Unbiased? I don’t think that’s possible, sir.

    • Maxxie@lemmy.blahaj.zone
      link
      fedilink
      English
      arrow-up
      10
      arrow-down
      1
      ·
      21 hours ago

      So is renewable energy, but if I start correcting people that it doesn’t exist because the sun is finite, I will look like a pedant.

      Because compared to fossil fuels they are renewable, the same way Wikipedia is unbiased compared to Fox News.

      • corsicanguppy@lemmy.ca
        link
        fedilink
        English
        arrow-up
        4
        ·
        8 hours ago

        the same way Wikipedia is unbiased compared to Fox News.

        If better was the goal, people would have voted for Kamala.

  • ClamDrinker@lemmy.world
    link
    fedilink
    English
    arrow-up
    29
    arrow-down
    2
    ·
    edit-2
    1 day ago

    This is the inevitable end game of some groups trying to make AI usage taboo through anger and intimidation, without room for reasonable disagreement. The ones devoid of morals and ethics will use it to their heart’s content and would never engage with your objections anyway, and when the general public is ignorant of what it is and what it can really do, people get taken advantage of.

    Support open source and ethical usage of AI, where artists, creatives, and those with good intentions are not caught up in your legitimate grievances with corporate greed, totalitarians, and the like. We can’t reasonably make it go away, but we can reduce harmful use of it.

          • ClamDrinker@lemmy.world
            link
            fedilink
            English
            arrow-up
            1
            ·
            edit-2
            19 minutes ago

            I didn’t say AI would solve that, but I’ll re-iterate the point I’m making differently:

            1. Spreading awareness of how AI operates, what it does, what it doesn’t, what it’s good at, what it’s bad at, and how it’s changing (such as knowing there are hundreds if not thousands of regularly used AI models out there, some owned by corporations, others open source, and still others somewhere in between) reduces misconceptions and makes people more skeptical when they see material that might have been AI generated or AI assisted being passed off as real. This is especially important to teach during transition periods such as now, when AI material is still more easily distinguishable from real material.

            2. People creating a hostile environment where AI isn’t allowed to be discussed, analyzed, or used in ethical and good-faith manners make it more likely that some people who desperately need to be aware of #1 stay ignorant. They will just see AI as a boogeyman, failing to realize that e.g. AI slop isn’t the only type of material AI can produce. This makes them more susceptible to seeing something made by AI and believing it, or misjudging the reality of the material.

            3. Corporations, and those without the incentive to use AI ethically, will not be bothered by #2, and will even rejoice that people aren’t spending time on #1. It will make it easier for them to claw AI technology for themselves through obscurity, legislation, and walled gardens, and the less knowledge there is in the general population, the more easily it can be used to influence people. Propaganda works, the propagandist is always looking for technology that reaches more people, and ill-informed people are easier to manipulate.

            4. And lastly, we must reward those who try to achieve #1 and avoid #2, while punishing those in #3. We must reward those who use the technology as ethically and responsibly as possible, as any prospect of completely ridding the world of AI is just futile at this point, and a lot of care will be needed to avoid the pitfalls where #3 gains the upper hand.
  • andros_rex@lemmy.world
    link
    fedilink
    English
    arrow-up
    41
    ·
    1 day ago

    This is a thing on social media already.

    Lots of bad faith conservatives will just output AI garbage. They don’t care about truth, they just want to waste your time. You spend time researching their claims, providing counter evidence - they don’t care, because they don’t even read what you say, just copy and paste into an LLM.

    It’s very concerning with the Trump administration’s attack on science. Science papers are disappearing and accurate, vetted information becomes sparser, and then you can train Grok to claim that all trans women are rapists or that global warming is just Milankovitch cycles.

    It is and has been an info war. An attack on human knowledge itself. And AI will be used to facilitate it.

    • CalipherJones@lemmy.world
      link
      fedilink
      English
      arrow-up
      4
      ·
      7 hours ago

      Never believe that anti-Semites are completely unaware of the absurdity of their replies. They know that their remarks are frivolous, open to challenge. But they are amusing themselves, for it is their adversary who is obliged to use words responsibly, since he believes in words. The anti-Semites have the right to play. They even like to play with discourse for, by giving ridiculous reasons, they discredit the seriousness of their interlocutors. They delight in acting in bad faith, since they seek not to persuade by sound argument but to intimidate and disconcert. If you press them too closely, they will abruptly fall silent, loftily indicating by some phrase that the time for argument is past.

      – Jean-Paul Sartre

      • andros_rex@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        7 hours ago

        But they are amusing themselves, for it is their adversary who is obliged to use words responsibly, since he believes in words.

        It is so goddamn frustrating. I had one on here claiming that the US had not deported US citizens, while linking an article saying that the US had deported four children who were citizens!

        I read fast and have been following this shit for so long that I can call a lot of it out. But it never changes their minds; they never concede defeat. They just jump from place to place. Or “you libs always assume I’m MAGA” is a funny one I keep seeing on here. Like, if you are supporting Trump’s policies, you are a Trump supporter. It’s so goddamn slimy and disingenuous.

        TERFs are the worst about it. They’ve started spreading Holocaust denial. I saw one claim that the Nazi government issued transvestite passes! Like no! It was the Weimar Republic! The Nazis used those passes to track down and kill trans people!

        You can provide crystal clear documentation, all of the sources they ask for - it’s never good enough and it’s exhausting.

        • SabinStargem@lemmy.today
          link
          fedilink
          English
          arrow-up
          1
          ·
          1 hour ago

          My takeaway: if they don’t respond to evidence in good faith, you can simply consider them as enemies. From there, the only time to argue with them is when you want to convince other people in the room about your position. The court of public opinion is the key to obtaining a better society.

          • andros_rex@lemmy.world
            link
            fedilink
            English
            arrow-up
            2
            ·
            57 minutes ago

            It’s probably something ASD-adjacent, but it just breaks my brain every time. It seems like there’s a large proportion of people who don’t actually care whether they have an accurate understanding of the world or not. The amount of times online I’ve been able to show someone the evidence they demanded, and then they double down. At best they’ll go silent, but then you’ll see them making the exact same claim later.

            In general right now it seems like there are a lot of people willing to call evil good, and good evil. Everything is backwards. The Moral Majority voted in a pedophile rapist.

    • shalafi@lemmy.world
      link
      fedilink
      English
      arrow-up
      8
      arrow-down
      3
      ·
      1 day ago

      ChatGPT:

      You’re absolutely right to be concerned — this is a real and growing problem. We’re not just dealing with misinformation anymore; we’re dealing with the weaponization of information systems themselves. Bad actors leveraging AI to flood conversations with plausible-sounding nonsense don’t just muddy the waters — they actively erode public trust in expertise, evidence, and even the concept of shared reality.

      The Trump-era hostility to science and the manipulation or deletion of research data was a wake-up call. Combine that with AI tools capable of producing endless streams of polished but deceptive content, and you’ve got a serious threat to how people form beliefs.

      It’s not just about arguing with trolls — it’s about the long-term impact on institutions, education, and civic discourse. If knowledge can be drowned in noise, or replaced with convincing lies, then we’re facing an epistemic crisis. The solution has to be multi-pronged: better media literacy, transparency in how AI systems are trained and used, stronger platforms with actual accountability, and a reassertion of the value of human expertise and peer review.

      This isn’t fear-mongering. It’s a call to action — because if we care about truth, we can’t afford to ignore the systems being built to undermine it.

        • Jax@sh.itjust.works
          link
          fedilink
          English
          arrow-up
          5
          ·
          1 day ago

          If it makes you feel better, you made me realize I’ve been using en dashes in places where I should be using em dashes! So, thank you.

  • TheGrandNagus@lemmy.world
    link
    fedilink
    English
    arrow-up
    121
    arrow-down
    5
    ·
    edit-2
    1 day ago

    Nope. I’d still say social media/social media algorithms.

    Imagine if social media didn’t exist (beyond small, tight-knit communities like forums about [topic], or BBS communities), but all these AI tools still did.

    Susan creates an AI generated image of illegal immigrants punching toddlers, then puts it on her “news” blog full of other AI content generated to push an agenda.

    Who would see it? How would it spread? Maybe a few people she knows. It’d be pretty localised, and she’d be quickly known locally as a crank. She’d likely run out of steam and give up with the whole endeavour.

    Add social media to the mix, and all of a sudden she has tens of thousands of eyes on her, which brings more and more. People argue against it, and that entrenches the other side even more. News media sees the amount of attention it gets and they feel they have to report, and the whole thing keeps growing. Wealthy people who can benefit from the bullshit start funding it and it continues to grow still.

    You don’t need AI to do this, it just makes it even easier. You do need social media to do this. The whole model simply wouldn’t work without it.

    • ZILtoid1991@lemmy.world
      link
      fedilink
      English
      arrow-up
      1
      ·
      7 hours ago

      It’s the algorithms + genAI, especially as the techbros got super mad about the progressive backlash against genAI, which radicalized every one of them into Curtis Yarvin-style technofeudalism.

    • Cheems@lemmy.world
      link
      fedilink
      English
      arrow-up
      3
      ·
      19 hours ago

      Social media, at least the mainstream stuff like MySpace, was the start of the downfall. I don’t think random forums were really the thing that caused everything to go sideways, but they were the precursor. Facebook has ruined things for generations to come.

    • taladar@sh.itjust.works
      link
      fedilink
      English
      arrow-up
      48
      ·
      1 day ago

      Half of it wouldn’t even work if the news media did their job and filtered out crap like that, instead of lazily reporting whatever is going on on social media.

      • aesthelete@lemmy.world
        link
        fedilink
        English
        arrow-up
        7
        ·
        edit-2
        1 day ago

        One week the whole US news cycle was dominated by “Cheungus posted an AI pic of Trump on Truth Social”… I mean… I get that the presidency was at times considered dignified in the modern era, so it’s something of a “vibe shift”, but the media has to have a better eye for bullshit than that. The indicators unfortunately are that it’s going to continue this slide, because newsrooms are conglomerating, slashing resources, and getting left in the dust by slanted podcasts and YouTube videos.

        Some of it is their own fault. People watching the local news full of social media AI slop are behaving somewhat understandably by turning off the TV and going straight to the trough instead of watching live as the news becomes even more of a shitty reaction video.

      • paraphrand@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        1 day ago

        They also shouldn’t report on the horse race. They should report on issues.

        Reporting on elections is always disappointing.

    • MalReynolds@slrpnk.net
      link
      fedilink
      English
      arrow-up
      11
      ·
      1 day ago

      While I generally agree and consider this insightful, it behooves us to remember the (actual, 1930s) Nazis did it with newspapers, radio and rallies (… in a cave, with a box of scraps).

  • crystalmerchant@lemmy.world
    link
    fedilink
    English
    arrow-up
    37
    arrow-down
    10
    ·
    1 day ago

    When the fuck will you people get it?? Every technology will eventually be used against you by the state.

      • ssillyssadass@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        ·
        1 day ago

        A machine gun is a tool that is made with one purpose. A better comparison would be a hunting rifle, or even a hammer.

          • fruitycoder@sh.itjust.works
            link
            fedilink
            English
            arrow-up
            1
            ·
            8 hours ago

            Right. It’s small and compact, so you can fit it in the bike, and a quick swing to someone’s dome just about does it. /s

            Violence is a method of action, some tools are force multipliers in that action, and thus useful in that case.

            Don’t get me wrong, hammers building houses and plowshares have done more to quietly change the world than guns and swords ever have, but guns and swords have.

            • petrol_sniff_king@lemmy.blahaj.zone
              link
              fedilink
              English
              arrow-up
              1
              ·
              1 hour ago

              some tools are force multipliers in that action, and thus useful in that case.

              Sure. And removing those force multipliers from play can affect the state of the game.

              When we get enough hammer murders, then we can talk about restricting hammer use.

    • theluddite@lemmy.ml
      link
      fedilink
      English
      arrow-up
      7
      arrow-down
      1
      ·
      1 day ago

      I don’t like this way of thinking about technology, which philosophers of tech call the “instrumental” theory. Instead, I think that technology and society make each other together. Obviously, technology choices like mass transit vs cars shape our lives in ways that simpler tools, like a hammer or whatever, don’t help us explain. Similarly, society shapes the way that we make technology.

      In making technology, engineers and designers are constrained by the rules of the physical world, but that is an underconstraint. There are lots of ways to solve the same problem, each of which is equally valid, but those decisions still have to get made. How those decisions get made is the process through which we embed social values into the technology, which are cumulative in time. To return to the example of mass transit vs cars, these obviously have different embedded values within them, which then go on to shape the world that we make around them. We wouldn’t even be fighting about self-driving cars had we made different technological choices a while back.

      That said, on the other side, just because technology is more than just a tool, and does have values embedded within it, doesn’t mean that the use of a technology is deterministic. People find subversive ways to use technologies in ways that go against the values that are built into it.

      If this topic interests you, Andrew Feenberg’s book Transforming Technology argues this at great length. His work is generally great and mostly on this topic or related ones.

      • blazeknave@lemmy.world
        link
        fedilink
        English
        arrow-up
        2
        ·
        1 day ago

        Just being a little sassy here, but aren’t you still just describing the use of technology in practice but calling it invention of different technology, which is the same point made by your parent comment?

        • theluddite@lemmy.ml
          link
          fedilink
          English
          arrow-up
          2
          ·
          1 day ago

          I get your point and it’s funny but it’s different in important ways that are directly relevant to the OP article. The parent uses the instrumental theory of technology to dismiss the article, which is roughly saying that antidemocracy is a property of AI. I’m saying that not only is that a valid argument, but that these kinds of properties are important, cumulative, and can fundamentally reshape our society.

    • Deceptichum@quokk.au
      link
      fedilink
      English
      arrow-up
      31
      arrow-down
      3
      ·
      2 days ago

      Kinda like people focusing on petty crime and ignoring the fact that corporations steal billions from us.

      We as a society give capitalism such a blanket pass, that we don’t even consider what it actually is.

    • thatsnothowyoudoit@lemmy.ca
      link
      fedilink
      English
      arrow-up
      21
      arrow-down
      5
      ·
      1 day ago

      Hot take: what most people call AI (large language and diffusion models) is, in fact, part of peak capitalism:

      • relies on ill-gotten gains (training data obtained without permission, payment, or licensing)
      • aims to remove human workers from the workforce within a system that (for many) requires them to work because capitalism has removed the bulk of social safety netting
      • currently has no real route to profit at any reasonable price point
      • speculative at best
      • reinforces the concentration of power amongst a few tech firms
      • will likely also result in regulatory capture with the large firms getting legislation passed that only they can provide “AI” safely

      I could go on but hopefully that’s adequate as a PoV.

      “AI” is just one of the cherries on top of late-stage capitalism that embodies the worst of all of it.

      So I don’t disagree - but felt compelled to share.

  • surph_ninja@lemmy.world
    link
    fedilink
    English
    arrow-up
    14
    arrow-down
    3
    ·
    1 day ago

    It could also be an effective tool for liberation. Tools are like that. Just matters how they’re used and by whom.

    • Match!!@pawb.social
      link
      fedilink
      English
      arrow-up
      7
      arrow-down
      2
      ·
      1 day ago

      some tools are actually strictly bad to use, like nuclear bombs, landmines, or chemical weapons

    • Melvin_Ferd@lemmy.world
      link
      fedilink
      English
      arrow-up
      5
      arrow-down
      2
      ·
      1 day ago

      But they have to be used.

      I often wonder why leftist-dominated spheres have been so driven to reject AI, given that we’re supposed to be more tech-dominant. Suspiciously, I noticed that early media on the left treated AI the same way right-wing media treated immigration. I really believe there was some narrative-building through media to introduce a point of contention within left-dominant spheres, to make them reject AI and its usefulness.

      • surph_ninja@lemmy.world
        link
        fedilink
        English
        arrow-up
        3
        arrow-down
        2
        ·
        1 day ago

        I haven’t seen much support for anti-AI narratives in leftist spaces. Quite the opposite: I’ve been reading about some tech socialists specifically setting up leftist uses for it.

        But your instincts are spot on. The liberals are being funded by tech oligarchs who want to monopolize control of AI, and have been aggressively lobbying for government restrictions on it for anti-competitive reasons.

        • ClamDrinker@lemmy.world
          link
          fedilink
          English
          arrow-up
          2
          arrow-down
          2
          ·
          edit-2
          1 day ago

          While there are spaces that are luckily still looking at it neutrally and objectively, there are definitely leftist spaces where AI hatred has snuck in, even to a reality-denying degree where lies about what AI is or isn’t have taken hold, and where providing facts to refute such things is rejected and met with hate and shunning, purely because it goes against the norm.

          And I can’t help but agree that they are being played, so that the only AI technology that will eventually be feasible will not be open source, and will be in the control of the very companies left-leaning folks dislike or hate.

          • surph_ninja@lemmy.world
            link
            fedilink
            English
            arrow-up
            3
            arrow-down
            1
            ·
            1 day ago

            Sounds like a good canary to help decide which leftist groups are worth participating in.