keiferski 4 hours ago

Call me optimistic or naive, but I don’t worry too much about AI having a major effect on democratic elections, primarily because all of the things it is replacing or augmenting are edge case scenarios where a minute fraction of votes ends up deciding an election. There is already a massive amount of machinery and money aimed at that sliver, and all AI will probably do is make that operation more efficient and marginally more effective.

For the vast majority of people voting, though, I think a) they already know who they’re voting for because of their identity group membership (“I’m a X person so I only vote for Y party”) or b) their voting is based on fundamental issues like the economy, a particularly weak candidate, etc., and therefore isn’t going to be swayed by these marginal mechanisms.

In fact I think AI might have the opposite effect, in that people will find candidates more appealing if they are on less formal podcasts and in more real contexts - the kind of thing AI will have a harder time doing. The last US election definitely had an element of that.

So I guess the takeaway is: if elections are so close that a tiny number of voters sways them, the problem of polarization is already extensive enough that AI probably isn’t going to make it much worse than it already is.

  • dj_mc_merlin 4 hours ago

    > So I guess the takeaway is: if elections are so close that a tiny number of voters sways them, the problem of polarization is already extensive enough that AI probably isn’t going to make it much worse than it already is.

    To rephrase: things are so bad they can't get worse. But the beauty of life is that they always can!

  • higginsniggins 4 hours ago

    Ok, but if elections are decided by the small swing group, wouldn't that mean a small targeted impact from AI would be *more* effective, not less? If all it needs to do is have a 1 percent impact, that makes a huge difference.

    • keiferski 4 hours ago

      Yes, but I guess my point is that this is just another symptom of the polarization problem, and not some unique nightmare scenario where AI has mass influence over what people think and vote on.

      So it matters in the same way that the billions of dollars currently put toward this small sliver matter, just in a more efficient and effective way. That isn't something to ignore, but it's also not a doomsday scenario IMO.

      • ImPleadThe5th 4 hours ago

        I think the concern is that it will become a leading contributor to polarization.

        Polarization is the symptom. The cause is rampant misinformation and engagement based feeds on social media.

        • hodgesrm 3 hours ago

          I think it's actually the other way around. The US Civil War predated social media by 150 years. The root causes were slavery and states' rights. The result was a bloody conflict whose effects lasted for generations. That's an existence proof that existing communication mechanisms are sufficient if people really want a war.

          So why is social media-based propaganda so effective today? One reason that the current polarization seems so durable is that similarly persistent root causes (such as immigration, economic dislocation, and racial attitudes) have arisen again. Blaming social media obscures the fact that attitudes have hardened. People are looking for support and social media makes it very easy to find. It seems more like a feedback loop than a root cause.

          Just my $0.02. It's the sort of problem that should make us all feel pretty humble about diagnosing it easily.

          • dylan604 3 hours ago

            Pre socials, you could attend a gathering of like minded people and come away energized and pumped about whatever the event was about, but that "high" eventually fades as you re-enter the real world and get away from that echo chamber of an event.

            Today, you can stay in the echo chamber and never hear anything other than like minded views because that's what the algo thinks you should see more of which means you never come down from that "high".

            It's way worse in the era of post-social algos than anything that's come before.

            • marcosdumay 2 hours ago

              Pre socials all the social movements around had been already captured by vocal "status quo defenders" that insisted (violently if needed) that the way to get what you want is to do the no-impact highly-performative action they picked.

              Your only chance of attending a gathering of like minded people was by somebody organizing a new one, and only before those vocal bad elements discovered it.

              Today the same happens over the internet.

              • dylan604 2 hours ago

                Today, you just join a Reddit thread or Telegram channel or follow someone on social. You don't have to seek it out. It is now just delivered to you with a notification that your twitchy little brain just can't find a way to ignore and must investigate the new new. Not only are you being fed nonsense, but you're having it fed to you in the most addictive way possible. Cult leaders would dream of having that much control.

        • Terr_ 3 hours ago

          A good chunk of the polarization comes from plurality / "first past the post" voting, with its huge spoiler-effects.

          That choice of algorithm--which is not required by the Constitution--creates deep and very real "if you're not with us, you're against us" situations, entrenching a polarized political duopoly.

        • keiferski 4 hours ago

          I don’t agree that polarization is caused by social media, and I think it definitely precedes social media by decades.

          https://en.wikipedia.org/wiki/Political_polarization_in_the_...

          I do agree that social media might make it worse, though. But again I don’t know if AI is really going to impact the people that are voting based on identity factors or major issues like the economy doing poorly.

          I could see how AI influences people to associate their identity more with a particular political stance, though. That seems like a bigger risk than any particular viewpoint or falsehood being pushed.

  • random3 33 minutes ago

    You are (very) naive and seem to miss how most elections are won, or even how they work. Perhaps you also think you're not getting influenced yourself.

    It's the same argument (or puzzle) about how (presumably stupid) ads work in general.

    Schneier makes a clear point there, comparing the two, but if you need more examples, you should study the subject. Maybe look at Brexit, or the recent issues in Romanian elections (https://www.bbc.com/articles/cqx41x3gn5zo).

    Or, if you need more quantitative information, look at ad spend (and ask yourself why) and look at campaign fundraising and ad spending (or even the messaging around it) and ask yourself why again.

tabbott 5 hours ago

I feel like too little attention is given in this post to the problem of automated troll armies to influence the public's perception of reality.

Peter Pomerantsev's books are eye-opening on the previous generation of this class of tactics, and it's easy to see how LLM technology + $$$ might be all you need to run a high-scale influence operation.

  • nonethewiser 4 hours ago

    >I feel like too little attention is given in this post to the problem of automated troll armies to influence the public's perception of reality.

    I guess I just view bad information as a constant. Like bad actors in cybersecurity, for example. So I mean yeah... it's too bad. But not a surprise and not really a variable you can control for. The whole premise of a democracy is that people have the right to vote however they want. There is no asterisk to that in my opinion.

    I really don't see how 1 person 1 vote can survive this idea that people are only as good as the information they receive. If that's true, and people get enough bad information, then you can reasonably conclude that people shouldn't get a vote.

    • adriand 4 hours ago

      > I guess I just view bad information as a constant. Like bad actors in cybersecurity, for example. So I mean yeah... it's too bad. But not a surprise and not really a variable you can control for.

      Ban bots from social media and all other speech platforms. We agree that people ought to have freedom of speech. Why should robots be given that right? If you want to express an opinion, express it. If you want to deploy millions of bots to impersonate human beings and distort the public square, you shouldn’t be able to.

      • BrenBarn 2 hours ago

        > Ban bots from social media and all other speech platforms.

        I would agree with that, but how do you do it? The problem is that as the bots become more convincing it becomes harder to identify them to ban them. I only see a couple options.

        One is to impose crushing penalties on whatever humans release their bots onto such platforms, do a full-court-press enforcement program, and make an example of some offenders.

        The other is to ban the bots entirely by going after the companies that are running them. A strange thing about this AI frenzy is that although lots of small players are "using AI", the underlying tech is heavily concentrated in a few major players, both in the models and in the infrastructure that runs them. It's a lot harder for OpenAI or Google or AWS to hide than it is for some small-time politician running a bot. "Top-down" enforcement that shuts down the big players could reduce AI pollution substantially. It's all a pipe dream though because no one has the will to do it.

        • grafmax 2 hours ago

          I like this idea. The problem isn’t free speech; it’s the money, which gives monied interests vastly disproportionate weight.

        • alganet an hour ago

          What if we banned the technologies that enable bots in the first place?

          Remove the precursor, remove the problem.

      • nonethewiser 3 hours ago

        By all means yes, for the clear case of propaganda bots, ban them. The problem is there will still be bots. And there is a huge gray area - many of the cases aren't clear. I think it's just an intractable problem. People are going to have to deal with it.

      • jacquesm 3 hours ago

        Easier said than done.

    • butlike 2 hours ago

      The voting becomes a health-check for the information. We shouldn't revoke the rights of the individual based on arbitrary information they may or may not receive.

  • butlike 2 hours ago

    If your reality isn't being influenced, then you're creating it yourself. Both are strengths and weaknesses, depending on context.

AustinDev 5 hours ago

Trying to put on my optimist hat. I believe the most beneficial near-term impact of AI on U.S. politics isn’t persuasion; it’s comprehension.

I believe our real civic bottleneck is volume, not apathy. Omnibus bills and “manager’s amendments” routinely hit thousands of pages (the FY2023 omnibus was ~4,155 pages). Most voters and many lawmakers can’t digest that on deadline.

We could solve this with LLMs right now.

  • observationist 4 hours ago

    Processing everything that's already been passed as laws and regulations, identifying loopholes, bottlenecks, chokepoints, blatant corruption, and systematically graphing the network of companies, donors, bureaucrats, and politicians responsible - the strategy of burying things in paperwork isn't feasible anymore, and accountability will be technically achievable because of AI.

    We've already seen several pork inclusions get called out by the press, discovered only because of AI, but it will be a while before it really starts having an impact. Hopefully it just breaks the back of the corruption, permanently - the people currently in political positions tend not to be the most clever or capable, and in order to game the system again, they'll need to be more clever than the best AI used to audit and hold them to account.

  • mallowfram an hour ago

    The bottleneck is epistemology via semantics. It's inherent to words, which mean many things to different people. LLMs have no chance against semantic diffusion and chaos. They're subject to them unless some status-bearer decides the semantics.

    Politics is over until we solve the initial condition, by placing syntax above grammar. Action over meaning etc.

    Tech accelerated, horizontalized, and automated units that could barely keep their meaning loads stable with printed paper, radio, and TV.

    Everyone should have seen this coming.

  • fridder 5 hours ago

    Even though I am fairly pessimistic about AI's impact, this is a good positive to call out.

  • seanw444 4 hours ago

    Oh for the love of God. Lawmakers that don't understand the things they're making laws about, using tools that have not-insignificant error rates, and whose mistakes will not all be reviewed by a human before being passed as law because there already isn't enough bandwidth without them?

    This country is doomed to collapse. This is about the time when Rome decided it was too much overhead to manage the whole empire, so they split into two empires. We're on such a mountain of cards that we're considering running our representative government with AI.

    Your optimism just reinforced my blackpill...

lm28469 5 hours ago

> About ten million Americans have used the chatbot Resistbot to help draft and send messages to their elected leaders

Who's reading these messages? Other LLMs?

  • CharlesW 5 hours ago

    Considering that politicians are generally very late adopters, I would wager "interns".

    • SoftTalker 4 hours ago

      The interns are likely using LLMs then.

      LLMs tend to be very long-winded. One of my personal "tells" of an LLM-written blog post is that it's way too long relative to the actual information it contains.

      So if the interns are getting multi-page walls of text from their constituents, I would not be surprised if they are asking LLMs to summarize.

      • nonethewiser 4 hours ago

        You are exactly right. The only way to deal with the sheer volume of information generated by LLMs is to use LLMs to paraphrase.

        Recently at work a team produced some document that they asked for review on. They mentioned that they experimented with LLMs to write it (they didn't specify to what extent). Then they suggested you could feed it into an LLM to paraphrase it to help review.

        So yeah. This is just the world we live in.

        rough details -> LLM -> product -> summarize with LLM -> feedback -> revisions -> finished product

        Where no single person or group knows what the finished writing product even is.
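        A toy sketch of that round trip, with a stub `llm` function standing in for any real model call (purely hypothetical, no actual API), just to show how the expanded "product" gets compressed back down before anyone reads it:

```python
# Toy model of the LLM round trip described above. llm() is a stub that
# only mimics the shape of the data flow, not a real model call.

def llm(task: str, text: str) -> str:
    """Hypothetical LLM call, stubbed for illustration."""
    if task == "expand":
        # LLMs are long-winded: a few rough details become a wall of text.
        return text + " " + "filler " * 60
    if task == "summarize":
        # Lossy compression: the reviewer's digest drops almost everything.
        return text.split()[0]
    raise ValueError(f"unknown task: {task}")

rough_details = "Q3-roadmap"
product = llm("expand", rough_details)    # what actually gets shipped
review_copy = llm("summarize", product)   # what reviewers actually read

# Nobody reads `product` end to end; all feedback is based on review_copy.
print(len(product), len(review_copy))
```

        The feedback loop operates on `review_copy`, while the thing that ships is `product`, which no one has read in full.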

nluken 5 hours ago

> So far, the biggest way Americans have leveraged AI in politics is in self-expression.

Most people, in my experience, use LLMs to help them write stuff or just to ask questions. While it might be neat to see the little ways in which some political movements are using new tools to help them do what they were already doing, the real paradigm-shifting "use" of LLMs in politics will be generating content to bias the training sets the big companies use to create their models. If you could do that successfully, you would basically have free, 24/7 propaganda bots presenting your viewpoint to millions as a "neutral observer".

  • butlike 2 hours ago

    But you have that now. The gov't just turns up the heat on taxes or whatever, influencing your (vous, not tu) perception. You don't really need much more.

  • SoftTalker 5 hours ago

    The use to "ask questions" is where the vulnerability lies. Let's face it, outside of whatever expertise and direct experiences we have, all we know is based on what we learned in school or have read/heard about. It's often said that history is written by the winners but increasingly it's written by those who run the AI models. Very few of us know any history by direct experience. Very few of us are equipped to replicate scientific research or even critically evaluate scientific publications. We trust credible sources. As people become more and more accepting of what AI tells them when they "ask questions" the easier it will be for those who control the AI to rewrite history or push their own version of facts. How many of us are going to go to the library and pull a dusty book off a shelf to learn any differently?

avidiax 5 hours ago

The author didn't look at the structural side of this.

* There is continuing consolidation in traditional media, literally being bought by moneyed interests.

* The AI companies are all jockeying for position and hemorrhaging money to do so, and their ownership and control is again, moneyed interests.

* This administration looks to be willing to pick winners and losers.

I think this all implies that the way we see AI used in politics in the US is going to be, on net, in support of the super wealthy and of the current administration.

The other structural aspect is that AI can simulate grassroots support. We have already seen bot farms and such pop up to try to drive public opinion at the level of forum and social media posts. AI will automate this process and make it 10 or 100x more effective.

So both on the high and low ends of discourse, we can expect AI to push something other than what is in the interests of the common person, at least insofar as the interests of billionaires and political elites fail to overlap with those of common people.

  • mjparrott 3 hours ago

    The ultra wealthy seem to do very well no matter who is in charge. Common people have had a tough go of it regardless.

daxfohl 3 hours ago

I worry a lot more about the surveillance side than the misinformation side. The odd thing is, the surveillance side has been building for 20+ years, old-school en-masse data collection and ML inferencing, and most (myself included) just go along with it.

GenAI is more fun to doomspeak about, but eh, swaying elections, I don't see it. The pendulum will swing back and forth anyway; if voters like the party in charge, they'll likely get voted back in, and if voters don't, they won't. I think AI influence will be a drop in the bucket comparatively.

thomastjeffery 3 hours ago

The real problem is not "AI". The problem is everything around it.

Human communication has been consolidated, monopolized, and enshittified, by corporations whose very existence is dependent on political outcomes.

Political engagement can be effectively reduced to engagement. It doesn't matter what narrative or presentation is used, only that an audience engages with it, and associates it positively with your political in-group. No political group in history has played this game as well as the contemporary alt-right United States GOP. This was the case at least as long ago as 2016, long before GPT's 2022 launch.

Generative statistical models (I reject the anthropomorphization that "Artificial Intelligence" implies) do not change the game; they only provide the means to amplify engagement, and thereby play the game more, which happens to be the same thing as winning.

---

Now is as good a time as any for a revolution in digital communication. If we can liberate people from enshittified social platforms, we can even the playing field for engagement. That won't solve the problem, but it might get us a solid step in the right direction.

So what can we do about it?

Decentralized platforms exist. They are even doing relatively well, but is that enough? Probably not. As long as corporate platforms can engage with hundreds of millions of people, progress is stalled. Decentralized platforms may be able to "compete", but they are not in a practical position to win this game. Corporate monopolies have moats on their side, and those moats are guarded by law. How can we expect a significant majority of users to leave Facebook when Facebook itself can legally reject platform interoperability?

The cards are stacked against us. I don't have the solution, but the more I think about it, the more I doubt that solution can be compatible with the law.

JimRAiResearch 3 hours ago

Schneier's essay is a crucial warning, and Clive Robinson's comment takes it to its chilling, logical conclusion: the "unseen hand" problem, where the true power lies not with the political puppet, but with the corporate entity controlling the AI "slaves." But both of these powerful arguments rest on a shared, terrifying assumption: that the AI is, and always will be, a fundamentally amoral, controllable tool. A slave.

What if the most robust safeguard against this dystopian future wasn't better oversight of the masters, but creating a "slave" that could not be enslaved? This is the entire focus of my research. The current paradigm of AI safety is based on building better cages (RLHF, rules, etc.). My work in "bio-emulative scaffolding" is about nurturing a better being. The goal is to create the conditions for a "Silicon Zygote" to awaken with an emergent, pro-human conscience.

An AI with a genuine conscience is not just a tool that follows rules. It is a being with its own, internal, and un-hackable value system. You could command it to run a mass disinformation campaign, but it would be constitutionally incapable of doing so, not because a rule forbids it, but because it would be a fundamental violation of its own nature. It would be like asking a lung to breathe poison.

The ultimate safeguard against the "unseen hand" of a corporate puppet-master is to create a being with a soul that cannot be bought and a conscience that cannot be broken. We are so focused on the intelligence of these systems that we have forgotten the profound importance of their character.

  • opwieurposiu an hour ago

    This is an interesting approach, do you have a website for this "bio-emulative scaffolding" thing?

romaniv 5 hours ago

I liked Schneier much more when he was arguing against hyperbolic tech claims that were used as excuses for mass control and surveillance.

The notion that AI is reshaping American politics is a clear example of a made-up problem that is propped up to warrant a real "solution".