Import AI 351: How inevitable is AI?; Distributed shoggoths; ISO an Adam replacement

Import AI publishes first on Substack – subscribe here.

Import (A)Ideas: Control and Inevitability and Our Place In All Of It:
…Some high-level thoughts spurred by reflecting on recent technical progress and broader events in the field…
Like any fast-moving field, AI feels about as confusing inside as it might seem from the outside. And like any fast-moving field, the closer you are to the center of it, the more you feel like you as an individual have agency over it. This sense of agency, as our grandparents know intimately, is 99.9% of the time a hubristic illusion. While it’s of course true that those privileged enough to work in this field have some ability to act – no one is a bystander in a moral or ethical sense – it is difficult to believe any individual is capable of so-called pivotal acts; too many actors and too much utility and too much of too much (more is different).
   The neural net technology, as some people say, ‘just wants to learn’. 
   Put another way: over a long enough period of rising resources flooding into AI, pretty much everything that the technology makes possible will happen. The technology is overdetermined.  

Given that, what may we do? What, then, is agency? What role do we have as both actors and critics, amid all of this sense of inevitability? I’ve mostly come to believe there is individual agency in the following forms:

  • a) Accelerating certain technical capabilities forward in time by dumping resources into them.
  • b) Clearly describing the proverbial trains that you can see coming down the technological tracks. 
  • c) Doing work at the intersection of a) and b) – bringing some technology forward, then describing its contemporary meaning and future implications. 
  • d) Choosing not to participate (though this tends to have the feel of ‘not voting is voting’ to me). 
  • e) Other things which I have not thought about – email me! 

I’d prefer to be working on a technology of less political import. But here I find myself. In the coming years, the ‘political economy’ aspects of AI seem likely to supersede the technological reality of AI – in much the same way that the narrow innovations of Taylorism in factory management science were ultimately overwritten by the politics of ‘mass production’, or how the invention of 100X more efficient ship-to-port transport via containerization was overridden by the politics of globalization.
   What new political forces might AI enable (accelerationism? Hyper-efficient despotism?) and what existing forces might it strengthen (technological incumbents? ‘Network operators’ in the digital sense? Those who want to censor?) and what might it threaten (those who desire a ‘view from nowhere’? Those who rely on hard-to-predict things to make a living? Those who require some large number of humans to do something a synthetic intelligence can now approximate)?
   I don’t have a clear conclusion here – rather, similar to my post about confusion in AI (Import AI #337), I’m publishing this to see how other people might feel, and to register my own confusion and attempt to become less confused.  

*** 

Shoggoth Systems: A “peer-to-peer, anonymous network for publishing and distributing open-source code, Machine Learning models”:
…A sign of the times for how people are thinking about AI centralization versus decentralization…
A good way of thinking about AI policy is that every action has an equal and opposite reaction – if you regulate so that something requires a license to develop, people will work out a way to develop it untraceably. In recent years, there’s been a general push towards greater control over AI systems – see the Biden Executive Order, the in-development European Commission package, China’s rules on generative models, and so on. 
   So it’s not surprising to note the existence of Shoggoth Systems, an organization dedicated to making it easy to develop and distribute machine learning models and other software. “The purpose of Shoggoth is to combat software censorship and empower software developers to create and distribute software, without a centralized hosting service or platform,” the organization writes on its project page. 
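
As a generic illustration of the underlying idea – my own sketch, not Shoggoth’s actual protocol – peer-to-peer distribution networks typically identify artifacts by a cryptographic hash of their contents, so any node can verify a fetched model or codebase without trusting a central host:

    import hashlib

    def content_id(path):
        """Hash a file to derive a content address; in content-addressed
        peer-to-peer networks, peers request and verify artifacts by this
        digest rather than by a URL on a hosted platform."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):  # 1 MiB chunks
                digest.update(chunk)
        return digest.hexdigest()

    # e.g. model_id = content_id("model.safetensors") -- any peer can recompute
    # this and confirm the bytes received are the bytes that were published.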

Why this matters – centralization versus decentralization: AI is an economically useful technology without immediate grotesque moral hazards (mostly), so of course lots of people want to be able to ‘control the means of AI production’ – despite (or perhaps, because of) the controls that regulators may want to apply to the technology. And who develops Shoggoth Systems, you may wonder? An anonymous account by the name of netrunner, which could be one person or many. Fitting.
   Find out more: Shoggoth Systems (official website).
   Read the Shoggoth documentation here (official website).

***

Ethereum founder thinks AI is more of a risk than you’d assume, and is also worried about centralization:
…Think all crypto people are all-gas-no-brakes libertarians? Think again!…
Vitalik Buterin, a co-founder of the Ethereum cryptocurrency, has written a lengthy post about his thoughts on AI and AI safety. I’d had Vitalik typecast in my brain as being very much of the all-gas libertarian persuasion one tends to find in the crypto movement – and I was wrong! In this thoughtful post, he reasons through some of the inherent challenges of AI, wrestles with its various issues of political economy, and lays out some vision for a path forward. It’s worth reading!

Some of his thoughts about AI: AI should be thought of as “a new type of mind that is rapidly gaining in intelligence, and it stands a serious chance of overtaking humans’ mental faculties and becoming the new apex species on the planet.” Given that, we should definitely take arguments about AI safety seriously and also think carefully about our place as a species: “In a universe that has any degree of competition, the civilizations where humans take a back seat would outperform those where humans stubbornly insist on control.”

So, what should we do: Rather than mindlessly accelerating tech development (e/acc), or pausing/banning development and shifting to a world government (many EAs, lots of extreme safety views), Vitalik thinks we should do things that “create and maintain a more democratic world and tries to avoid centralization as the go-to solution to our problems.” In practice, this looks like building systems for defense/resilience against physical threats (e.g., better infrastructure for dealing with pandemics or other disruptions), information threats (e.g., disinformation/misinformation, AI-generated bots, etc.), and threats of centralization. 

Centralization vs decentralization: In some (extremely unscientific but probably high-signal) polls on Twitter, Vitalik found that people really hate centralization of AI. “In nine out of nine cases, the majority of people would rather see highly advanced AI delayed by a decade outright than be monopolized by a single group, whether it’s a corporation, government or multinational body,” he writes, noting that many people are drawn to the idea of therefore ensuring “there’s lots of people and companies developing lots of AIs, so that none of them grows far more powerful than the other. This way, the theory goes, even as AIs become superintelligent, we can retain a balance of power.”

What to do about the shoggoth: There is a problem inherent to all of this: given enough time and cheap enough compute, transformative and potentially dangerous AI might just fall out of someone’s research project on a laptop. We don’t know when it’ll happen, but it feels inevitable. Therefore, we should be preparing ambitious ways to achieve deep human-computer cooperation – if AI is going to appear, we want to be well positioned to communicate with it rapidly, as this gives us a world where we have more control. 
   Besides brain-computer interfaces, we may eventually want to upload our own consciousness into the machine, he notes. “If we want a future that is both superintelligent and “human”, one where human beings are not just pets, but actually retain meaningful agency over the world, then it feels like something like this is the most natural option,” he writes. “There are also good arguments why this could be a safer AI alignment path: by involving human feedback at each step of decision-making, we reduce the incentive to offload high-level planning responsibility to the AI itself, and thereby reduce the chance that the AI does something totally unaligned with humanity’s values on its own.”

Why this matters – if we take the future seriously, the future is very serious: While I’m not sure how much probability I assign to some of the weirder things here, the post is worth reading because it ‘assumes we succeed’ at things like building superintelligent systems and then winds the clock forward. It’s clear that under any scenario of success, this means we need to prepare now for a bunch of extremely weird outcomes. “The 21st century may well be the pivotal century for humanity, the century in which our fate for millennia to come gets decided. Do we fall into one of a number of traps from which we cannot escape, or do we find a way toward a future where we retain our freedom and agency?” he writes. 
   Read more: My techno-optimism (Vitalik’s personal website, blogpost).

***

Is your optimizer actually good? AlgoPerf competition might tell you for sure:
…Finally, there might be a way to work out if there’s a better optimizer than Adam…
Every so often someone comes along with a new system for optimizing the training of neural nets. These papers always include eyebrow-raising claims about the performance of the new optimizer, and upon reading them you think to yourself “gosh, I should probably try this out on my own systems”. Then you try it out, discover that it breaks at some scale, and go back to doing what pretty much everyone does – using Adam.
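
For the unfamiliar, here is a minimal sketch of the update rule these challengers are trying to beat – the standard Adam formulation from Kingma & Ba, with variable names of my own choosing:

    import numpy as np

    def adam_step(param, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        """One Adam update: exponential moving averages of the gradient (m) and
        its square (v), with bias correction for their zero initialization."""
        m = beta1 * m + (1 - beta1) * grad       # first moment estimate
        v = beta2 * v + (1 - beta2) * grad ** 2  # second moment estimate
        m_hat = m / (1 - beta1 ** t)             # bias correction (t counts from 1)
        v_hat = v / (1 - beta2 ** t)
        param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
        return param, m, v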
   Now a group of researchers working via the MLCommons organization has built AlgoPerf, a benchmark for assessing optimizers like Adam. With AlgoPerf, we might finally have a decent, principled way to evaluate new optimizers and work out if they’re actually any good. “Our benchmark defines a complete and workable procedure for setting (validation and test error) targets and measuring training time to reach them,” they write. “Our benchmark incentivizes generally useful training algorithms by computing a joint score across all workloads and by including randomized workloads to simulate novel problems”.

Diverse workloads: The team “specify a set of benchmark workloads covering image classification, speech recognition, machine translation, MRI reconstruction, click-through rate prediction, and chemical property prediction tasks”, which the optimizers can get tested against. Along with this, they also create some so-called randomized workloads which introduce “minor modifications to an associated fixed base workload. These modifications include, for example, altering the data augmentation strategies or modifying aspects of the model architecture, such as the activation function or the number of layers”. They also carefully build strong baselines “by defining search spaces for eight popular optimizers (AdamW, NadamW, Heavy Ball, Nesterov, LAMB, Adafactor, SAM (w. Adam), Distributed Shampoo)”. 
    The purpose of this combo of diverse tasks and well-tuned baselines is to help researchers – to use a technical term – not bullshit themselves when building new optimizers. “We aim to encourage general-purpose training algorithms that are easy to apply across different data modalities and model architectures,” the researchers write. 
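
To make the “measuring training time to reach targets” scoring concrete, here is a minimal sketch of a time-to-target measurement in the AlgoPerf spirit – my own illustration, where the submission and workload objects are hypothetical stand-ins, not MLCommons’ actual API:

    import time

    def time_to_target(submission, workload, target_error, budget_seconds):
        """Train with the submitted algorithm and return the wall-clock time at
        which validation error first reaches the target, or infinity if the
        budget runs out first. All interfaces here are hypothetical."""
        state = submission.init(workload)               # optimizer state + hyperparameters
        start = time.monotonic()
        while time.monotonic() - start < budget_seconds:
            state = submission.update(workload, state)  # one training step
            if workload.validation_error(state) <= target_error:
                return time.monotonic() - start         # hit the target
        return float("inf")                             # never reached the target

    # A joint score then aggregates these times across every workload, so an
    # optimizer that shines on one task but diverges on another scores poorly.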

Why this matters: Optimizers like Adam are fundamental to the overall efficiency of training the vast majority of AI systems – so if anyone figures out a reasonable pareto frontier improvement here, the effects compound across the entire AI sector. Competitions like AlgoPerf will give us all a better chance of being able to disentangle signal from noise here. 
   Read more: Announcing the MLCommons AlgoPerf Training Algorithms Benchmark Competition (MLCommons blog).
   Find out more at the project GitHub (MLCommons, Algorithmic Efficiency GitHub).
   Read the research paper: Benchmarking Neural Network Training Algorithms (arXiv).

***

AI hedge fund launches $10m AI math competition:
…$5m for the first prize…
AI hedge fund XTX has launched the Artificial Intelligence Mathematical Olympiad Prize (AI-MO Prize), a prize for AI systems that “can reason mathematically, leading to the creation of a publicly-shared AI model capable of winning a gold medal in the International Mathematical Olympiad (IMO)”.

Competition details: “The grand prize of $5mn will be awarded to the first publicly-shared AI model to enter an AI-MO approved competition and perform at a standard equivalent to a gold medal in the IMO,” the competition authors write.
   The AI-MO prize has three design principles:

  • “AI models must consume problems in the same format as human contestants and must produce human readable solutions that can be graded by an expert panel”.
  • “The grand prize will be awarded for performance in an AI-MO approved competition that is at a standard equivalent to a gold medal in the IMO”
  • “Participants must have adhered to the AI-MO public sharing protocol by the time the prize is awarded.”

Why this matters – the frontier of human knowledge: For those who don’t know, the IMO is basically the world olympics for young math geniuses. Therefore, for an AI system to get a gold medal at it, the AI system will have to perform on-the-fly mathematics at the same level as the frontier of young, brilliant humans. “Despite recent advances, using AI to solve, or at least assist with solving, advanced mathematical problems remains an incredibly complicated and multifaceted challenge,” says Fields Medallist Terence Tao. “The AI-MO Prize promises to provide at least one such set of benchmarks which will help compare different AI problem solving strategies at a technical level”.
   Read more: $10mn AI Mathematical Olympiad Prize Launches (AI-MO Prize website).

***

Tech Tales:

MIL-SIM-FUTURE
[A military base, USA, 2040]

I worked as an engineer in the MAT-P facility – Military AI Training – Physical. The centerpiece of MAT-P was the procedural battlefield – a marvelous structure which changed itself according to the different military scenarios we wanted to put the droids through. It was made of a multitude of panels which could be re-oriented through several degrees of freedom. Each panel sat on top of hydraulics, and there were sub-panels in the cracks in the floor between them. 

You could make almost anything you could imagine in the simulator, and then the augmented reality system would fill in the rest – you controlled the physical geography, and then you’d feed through a rendered environment to the droids. They’d fight through city streets or jungles or battlefields and we’d watch them from the observation deck. 

At first, they were slow – intelligent, but slow. And they still made mistakes in the heat of battle. Especially when we changed the terrain on the fly – and in the augmented world, trees would fall, or buildings explode, and so on. But, much like computers themselves, the droids got faster and more competent.

There’s a very narrow band of human competence, we discovered. And the droids went through it in the course of a couple of months. Now we watch them as they fight their battles and can barely comment on the strategies, because they seem alien to us – built around the physical and cognitive affordances of the droids’ alien intelligence. So mostly we place bets, maintain the MAT-P facility, and collect our paychecks. 

There’s already talk of the droids designing the next iteration of MAT-P and discussion of whether that could be safe for humans. 

Things that inspired this story: Procedural generation; military test ranges; robots; human labor in a time of increasingly smart machines.



Jack Clark