🌰 (nut) | Literature note

Talk: The Expanding Dark Forest and Generative AI

Highlights

  • “very online”. I live on Twitter and write a lot online. I hang out with people who do the same.
  • 18th-century men of letters.
  • dark forest theory
  • dark forest at night: **a place that appears quiet and lifeless, because if you make noise the predators will come eat you.**
  • Yancey Strickler in 2019 in the article The Dark Forest Theory of the Internet
  • web can often feel lifeless, automated, and devoid of humans.
  • Lots of this content is authored by bots, marketing automation, and growth hackers pumping out generic clickbait with ulterior motives.
  • Low-quality listicles, productivity rubbish, insincere templated crap, growth hacking advice, banal motivational quotes, and dramatic clickbait.
  • overwhelming flood of this low-quality content makes us retreat away from public spaces of the web.
  • lots of unnecessarily antagonistic behaviour, at scale.
  • we risk becoming a target.
  • “main charactered”.
  • “So You’ve Been Publicly Shamed,” Jon Ronson [[So You’ve Been Publicly Shamed - Jon Ronson]]
  • difficult to find people who are being sincere, seeking coherence, and building collective knowledge in public.
  • I’m interested in enabling productive discourse and community building on at least some parts of the web.
  • semi-private spaces like newsletters and personal websites
  • retreat further into gatekept private chat apps like Slack, Discord, and WhatsApp.
  • express our ideas, with things we say taken in good faith and opportunities for real discussions.
  • none of this is indexed or searchable, and we’re hiding collective knowledge in private databases that we don’t own.
  • They’re trained on a huge volume of text scraped primarily from the English-speaking web.
  • Jasper, Copy.ai, Moonbeam
  • more sophisticated methods of prompting language models, such as “prompt chaining” or composition.
  • Ought has been researching this
  • libraries like LangChain
  • Prompt chaining is a way of setting up a language model to mimic a reasoning loop in combination with external tools.
  • It can pick from a set of tools to help solve the problem, such as searching the web, writing and running code, querying a database, using a calculator, hitting an API, connecting to Zapier or IFTTT, etc. (a minimal sketch of this loop appears after these highlights).
  • “generative agents”.
  • Just over two weeks ago, the paper **“Generative Agents: Interactive Simulacra of Human Behavior”** was published.
  • These language-model-powered sims had some key features, such as a long-term memory database they could read and write to, the ability to reflect on their experiences, planning what to do next, and interacting with other sim agents in the game.
  • There’s a new library called AgentGPT
  • It’s now relatively easy to spin up similar agents that can interact with the web (see the agent sketch after these highlights).
  • we’re about to drown in a sea of informational garbage.
  • absolutely swamped by masses of mediocre content.
  • We’ll need to find more robust ways to filter our feeds and curate good-quality work.
  • Such as facilitating genuine human connections, pursuing collective sense-making and building knowledge together, and ideally grounding our knowledge of the world in reality.
  • about digital gardening, which is essentially having your own personal wiki on the web.
  • make the web a space for collective understanding and knowledge-building,
  • Why does it matter if a generative model made something rather than a human?
  • differences between content generated by models versus content made by humans.
  • First, its connection to reality; second, the social context it lives within; and finally, its potential for human relationships.
  • generated content is different because it has a different relationship to reality than us.
  • This is the core of all science, art, and literature. We are trying to understand and teach each other things through writing.
  • In some sense, it’s fully UNHINGED. The model cannot check its claims against reality because it can’t access reality.
  • They’re confused about who they are and where they are, but they’re still super knowledgeable.
  • So simulated humans that can only deal with language are missing a big part of what we perceive as human “reality.”
  • Everything we say is contextual and relies on a shared social world.
  • They know nothing about the cultural context of who they’re talking to.
  • represent a very particular way of seeing the world.
  • “Every way of life represents a communal experiment in living. The world itself is never settled in its structure and composition. It is continually coming into being.”
  • Generating a mass of content from a very particular way of viewing the world funnels us down into a monoculture.
  • When you read someone else’s writing online, it’s an invitation to connect with them.
  • A lot of this talk is based on an essay called The Expanding Dark Forest and Generative AI
  • how we might prove we’re human on a web filled with fairly sophisticated generated content and agents.
  • On the new web, we’re the ones under scrutiny. Everyone is assumed to be a model until they can prove they’re human.
  • This raises both the floor and the ceiling for the quality of writing.
  • They will try to outsource too much cognitive work to the language model and end up replacing their critical thinking and insights with boring, predictable work.
  • they shouldn’t be letting language models literally write words for them. Instead, they’ll strategically use them as part of their process to become even better writers.
  • using them as sounding boards while developing ideas, research helpers, organisers, debate partners, and Socratic questioners.
  • enter a phase of human centipede epistemology.
  • going to use the text generated by these models to train new models. That tenuous link to the real world becomes completely divorced from reality.
  • We will begin to prefer offline-first interactions.
  • the only way to confirm humanity is to meet offline over coffee or a drink.
  • Two people who each know someone in this trust network can confirm each other’s humanity through it.
  • create on-chain authenticity checks for human-created content on the web (the underlying primitive is sketched after these highlights).
  • reasonable to assume we’ll each have a set of personal language models helping us filter and manage information on the web.
  • The product decisions that expand the dark forestness of the web are the problem.
  • if you are working on a tool that enables people to churn out large volumes of text without fact-checking, reflection, and critical thinking. And then publish it to every platform in parallel… please god, stop.
  • First, protect human agency. Second, treat models as reasoning engines, not sources of truth. And third, augment cognitive abilities rather than replace them.
  • A more ideal form of this is the human and the AI agent as collaborative partners doing things together. These are often called human-in-the-loop systems (a minimal example follows these highlights).
  • locus of agency remains with the human.
  • treat models as tiny reasoning engines, not sources of truth.
  • One alternate approach is to start with our own curated datasets we trust.
  • We can then run many small specialised model tasks over them. We can do things like:
    • Summarise
    • Extract structured data
    • Find contradictions
    • Compare and contrast
    • Group by different variables
    • Stage a debate
    • Surface causal reasoning chains
    • Generate research questions
  • These outputs aren’t final, publishable material. They’re just interim artefacts in our thinking and research process (see the pipeline sketch after these highlights).
  • This paper on **“Sparks of Artificial General Intelligence: Early experiments with GPT-4”**
  • we should be augmenting our cognitive abilities rather than trying to replace them.
    • Note: Good picture too
  • Language models are very good at some things humans are not good at, such as search and discovery, role-playing identities/characters, rapidly organising and synthesising huge amounts of data, and turning fuzzy natural language inputs into structured computational outputs.
  • And humans are good at many things models are bad at, such as checking claims against physical reality, long-term memory and coherence, embodied knowledge, understanding social contexts, and having emotional intelligence.
  • we should think of robots as animals – as a companion species who complements our skills.
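
The highlight above about prompt chaining describes a reasoning loop in which the model picks tools and sees their results. Below is a minimal sketch of that pattern, assuming a hard-coded call_model stand-in for a real language model API and a toy calculator tool; libraries like LangChain, mentioned above, implement production versions of this same loop.

```python
import json

def call_model(prompt: str) -> str:
    """Stand-in for a real language model API call. Hard-coded so the demo
    is self-contained: it first requests the calculator, then finishes once
    a tool result appears in the transcript."""
    if "Tool calculator" not in prompt:
        return json.dumps({"tool": "calculator", "input": "42 * 17"})
    return json.dumps({"tool": "finish", "input": "42 * 17 = 714"})

TOOLS = {
    # A real agent might also expose web search, code execution, database
    # queries, arbitrary APIs, Zapier/IFTTT hooks, etc.
    "calculator": lambda expr: str(eval(expr, {"__builtins__": {}})),
}

def run_agent(question: str, max_steps: int = 5) -> str:
    """The reasoning loop: the model picks a tool, we run it, and the result
    is appended to the transcript the model sees on the next step."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        step = json.loads(call_model(transcript))
        if step["tool"] == "finish":
            return step["input"]
        result = TOOLS[step["tool"]](step["input"])
        transcript += f"Tool {step['tool']}({step['input']}) -> {result}\n"
    return "No answer within the step budget."

print(run_agent("What is 42 * 17?"))  # -> 42 * 17 = 714
```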
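
The “Generative Agents” highlights list four key features: a long-term memory the agent reads and writes, reflection on experience, planning, and interaction with other agents. Here is one way those pieces might fit together; the GenerativeAgent class, its method names, and the llm stub are illustrative assumptions, not the paper’s actual code.

```python
from dataclasses import dataclass, field

def llm(prompt: str) -> str:
    """Stand-in for a real language model call."""
    return f"(model output for: {prompt[:40]}...)"

@dataclass
class GenerativeAgent:
    name: str
    memory: list[str] = field(default_factory=list)  # long-term memory store

    def observe(self, event: str) -> None:
        """Write an observation into long-term memory."""
        self.memory.append(event)

    def reflect(self) -> str:
        """Distil recent memories into a higher-level insight, stored back in memory."""
        insight = llm(f"What does {self.name} conclude from: {self.memory[-5:]}")
        self.memory.append(f"Reflection: {insight}")
        return insight

    def plan(self) -> str:
        """Decide what to do next, conditioned on memory."""
        return llm(f"Given memories {self.memory[-5:]}, what should {self.name} do next?")

# Two agents interacting: each observes the other's actions.
alice, bob = GenerativeAgent("Alice"), GenerativeAgent("Bob")
alice.observe("Bob said hello at the cafe.")
bob.observe("Alice waved back.")
alice.reflect()
print(alice.plan())
```

In the paper itself, memory retrieval weighs recency, importance, and relevance; a plain list stands in for that here.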
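
On “on-chain authenticity checks”: whatever the chain layer looks like, the underlying primitive is a digital signature. The author signs a post with a private key, and anyone holding the published public key can verify it. A minimal sketch using the cryptography package’s Ed25519 keys; note that a signature only proves the content came from the key holder, not that a human wrote it, so it complements rather than replaces the trust networks above.

```python
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

author_key = Ed25519PrivateKey.generate()   # kept secret by the author
public_key = author_key.public_key()        # published, e.g. on a chain

post = "I wrote this by hand, no model involved.".encode()
signature = author_key.sign(post)

try:
    public_key.verify(signature, post)      # raises if post or key don't match
    print("Signature valid: content is from the key holder.")
except InvalidSignature:
    print("Signature invalid: content altered or not from this author.")
```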
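
On human-in-the-loop systems: the simplest form of keeping the locus of agency with the human is an approval gate between the model’s proposal and any real action. A sketch under that assumption; propose_action is a hypothetical stand-in for a model call.

```python
def propose_action(goal: str) -> str:
    """Stand-in for a model suggesting a next step toward a goal."""
    return f"Draft a reply email about: {goal}"

def human_in_the_loop(goal: str) -> None:
    proposal = propose_action(goal)
    answer = input(f"Model proposes: {proposal!r}. Approve? [y/N] ")
    if answer.strip().lower() == "y":
        print("Executing:", proposal)   # the action runs only after consent
    else:
        print("Discarded. The human remains in control.")

human_in_the_loop("rescheduling Tuesday's meeting")
```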
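
Finally, the curated-dataset idea: run many small, specialised model tasks over documents you already trust and treat the outputs as interim artefacts for a human to review. The task names follow the list in the highlight; the prompts, sample notes, and the call_model stub are assumptions for illustration.

```python
def call_model(prompt: str) -> str:
    """Stand-in for a real language model call."""
    return f"(model output for: {prompt[:50]}...)"

# Documents we trust, e.g. from a personal digital garden.
TRUSTED_NOTES = [
    "Note A: The experiment showed a 12% improvement.",
    "Note B: The same experiment showed no improvement.",
]

# Narrow, single-purpose tasks rather than one big "write it for me" prompt.
TASKS = {
    "summarise": "Summarise these notes in two sentences:\n{docs}",
    "extract": "Extract every numeric claim as JSON:\n{docs}",
    "contradictions": "List claims in these notes that contradict each other:\n{docs}",
    "questions": "Generate three research questions raised by these notes:\n{docs}",
}

def run_tasks(docs: list[str]) -> dict[str, str]:
    joined = "\n".join(docs)
    return {name: call_model(tpl.format(docs=joined)) for name, tpl in TASKS.items()}

artefacts = run_tasks(TRUSTED_NOTES)   # interim artefacts, not publishable output
for name, output in artefacts.items():
    print(f"[{name}] {output}")
```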
