evil robots blog

In 2003, now 20 years ago, I proposed a 3D physical simulation as a new league for RoboCup, the largest annual robotics competition [1]. Marco Koegler and I had developed a prototype with the ODE open-source physics engine and a graph-based, flexible scene representation. (Sim)Spark, the new simulator, was released on SourceForge and accepted as the basis for the new RoboCup 3D Soccer Simulation League. We used the year until 2004 to turn the prototype into something that could actually be used [2] - initially the robots were just spheres that could "kick" the ball using collisions. A scene description language was added, and eventually we started to use articulated robots - first a model of the Fujitsu HOAP, and later the Aldebaran Nao.

Many other people contributed to the version that is still used for the annual competitions today. Markus Rollmann and Joschka Boedecker (now Freiburg) were early contributors, and Joschka's work helped a great deal in getting more people involved and helping them get used to the flexible but somewhat complicated architecture. In 2006 or so, I created a visualisation using Ogre3D, but a better one (that is still being used) was created by Justin Stoecker and Ubbo Visser (U Miami) [3]. Here's a YouTube video from the 2023 final, FCPortugal playing magmaOffenburg: https://www.youtube.com/live/j8Qre4XjaEI?feature=share&t=111

Recent work on robot soccer at DeepMind [4] also made use of a physical soccer simulator, but unfortunately does only a poor job of referencing this (and other) prior work from RoboCup.

It's been amazing to see the evolution and use of our (Sim)Spark simulator over the past two decades, from its somewhat modest beginnings. From today's perspective, we would obviously approach a few things differently. Maybe it will also be time for a different simulator at some point - better support for machine learning approaches is one thing that comes to mind, e.g., in the form of a fully differentiable simulator.

[1] Simulation League: The Next Generation
Marco Kögler & Oliver Obst
In: Polani, D., Browning, B., Bonarini, A., Yoshida, K. (eds.) RoboCup 2003: Robot Soccer World Cup VII. Lecture Notes in Computer Science, vol. 3020. Springer.
https://link.springer.com/chapter/10.1007/978-3-540-25940-4_40

[2] Spark – A Generic Simulator for Physical Multi-agent Simulations
Oliver Obst & Markus Rollmann
https://link.springer.com/chapter/10.1007/978-3-540-30082-3_18

[3] RoboViz: Programmable Visualization for Simulated Soccer
Justin Stoecker & Ubbo Visser
https://link.springer.com/chapter/10.1007/978-3-642-32060-6_24

[4] https://www.deepmind.com/publications/from-motor-control-to-team-play-in-simulated-humanoid-football

I’ve been working in AI for more than 20 years now. Nothing is faster than the speed of light, except maybe the speed with which people have recently become “AI experts”.
AI has certainly come a long way, from “AI is whatever doesn’t work yet” to where we are now. But AI, and interest in it, has always moved between extremes of hype and disillusionment. In its short history, AI has been predicted to overtake human intelligence multiple times. The current alarmism about AI as an “existential threat”, which has now also reached Australia, is just that - a mix of sensationalist hype, marketing tactics, and overinflated egos.

I’m not suggesting there are no issues - there are, and many people have been writing about them, though maybe in less alarmist ways. “Existential” they are not, and labelling them as such distracts from actual existential threats.

In Australia, the rate of species going extinct is higher than almost anywhere else - how is that for an existential threat? What about the impacts of climate change that Australia is particularly vulnerable to, including future droughts and bushfires? And I’m not sure we have learned enough about how to deal with the next pandemic either. There are many more things that are plausibly existential than some (still undefined) existential threat from AI.

There is definitely work to do, and it is also nice that we have moved on from hearing “it’ll never work” - but we will be better off working on the issues while keeping them in perspective.

I remember when AI was "computer science that doesn't work yet". Now, as artificial intelligence reshapes our world, the widespread use and potential integration of AI language models into commonly used office software signals a new era - one that feels similar to only a few other dramatic changes in technology, such as the dawn of the world wide web in the early 1990s, yet with a quality of its own.

With these models increasingly influencing our daily processes, it is crucial to address their implications for innovation and intellectual property, and their effects on the complex systems we operate in: markets, organisations, and business and political relationships. The question is whether this development (or revolution, depending on your point of view) will actually unleash new creative potential or inadvertently become an obstacle to the human ingenuity that has driven our progress so far.

A major yet overlooked concern is the convergence of ideas and the risk of AI-induced 'groupthink' [1]. The anticipated widespread adoption of language models could lead to the homogenisation of ideas and strategies, reducing creative problem-solving and diversity of thought. The phenomenon can be compared to soldiers marching in lockstep on a bridge, causing its collapse through the amplified effect of their synchronised movements. Similarly, widespread use of the (at least currently) very limited number of AI language models can create situations where everyone moves in lockstep, leading to a decline in creativity and diverse thought. AI has the opportunity to be a catalyst for (human) creativity, but the risk is that it may actually do the opposite.
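
As a toy illustration of this lockstep effect (a minimal sketch with made-up numbers, not a model of any particular product), consider many organisations each picking a strategy based on a model's scoring of the candidate options. If every organisation consults its own model, idiosyncratic differences keep the picks diverse; if all of them consult the same model, everyone converges on the same top-ranked option.

```python
import random

STRATEGIES = ["A", "B", "C", "D", "E"]

def pick_strategy(scores):
    """An organisation 'decides' by taking the highest-scoring strategy."""
    return max(STRATEGIES, key=lambda s: scores[s])

def distinct_picks(n_orgs, shared_model, seed=0):
    """Count how many distinct strategies n_orgs organisations end up with."""
    rng = random.Random(seed)

    def model_scores():
        # A "model" here is just a noisy scoring of the candidate strategies.
        return {s: rng.gauss(0.0, 1.0) for s in STRATEGIES}

    if shared_model:
        scores = model_scores()  # everyone consults the same model
        picks = [pick_strategy(scores) for _ in range(n_orgs)]
    else:
        picks = [pick_strategy(model_scores()) for _ in range(n_orgs)]
    return len(set(picks))

print("distinct strategies, independent models:", distinct_picks(100, shared_model=False))
print("distinct strategies, one shared model:  ", distinct_picks(100, shared_model=True))
```

With independent models, the hundred organisations typically keep all five options in play; with one shared model, the count collapses to a single strategy - which is exactly the homogenisation the bridge analogy points at.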

A related issue is the inherent bias present in AI language models [2]. While much criticism has rightfully been directed at the specific biases of such models, their effects on underrepresented groups, or their open displays of racism, more insidious problems may also arise from the network effect of their widespread adoption. Even if individual biases were mitigated, all models will inevitably retain some bias. The cumulative effect of these subtle biases could lead to severe consequences, as large-scale use of only a very few such models may amplify and reinforce them. It is essential to acknowledge and address this issue to ensure that AI serves as a tool for fostering diverse and inclusive innovation.

New and significant risks also emerge in the domain of intellectual property when companies worldwide run their ideas and strategies through a central language model that may learn from office documents, presentations, and emails. Quite likely, confidential information and innovative concepts will unintentionally flow into the model, expanding its knowledge base. As a result, company secrets and intellectual property may inadvertently become available to other organisations or individuals using the same model - a threat to competitive advantage and to the security of trade secrets.

Critical thinking and diversity are crucial to innovation. Models that represent an average of "everything" that is available digitally, with ethical standards set by a handful of organisations, have little chance of contributing to either if applied at large scale.

This last point leads to the effect of widespread application of such models on complex systems, with unpredictable consequences and feedback loops. These dynamics can amplify existing biases, create new vulnerabilities, and disrupt the delicate balance of networked systems. The financial industry offers an example that illustrates the potential global dynamics: imagine multiple financial institutions using the same AI language model to develop trading strategies and risk assessments. A sudden, unforeseen market event not detected by the AI could lead to all institutions taking similar actions simultaneously, resulting in a massive wave of selling and a potentially catastrophic collapse of financial markets. Many more examples are conceivable.
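
A back-of-the-envelope sketch of that market scenario (purely illustrative, with an invented decision threshold and no real market data) makes the difference concrete: if each institution reads the market through its own noisy model, only a small fraction crosses the "sell" threshold at the same time, whereas if all of them react to one shared signal, a single bad reading can tip every institution at once.

```python
import random

def worst_simultaneous_selloff(n_institutions, shared_model, trials=10_000, seed=1):
    """Worst-case fraction of institutions that decide to sell in the same step."""
    rng = random.Random(seed)
    worst = 0.0
    for _ in range(trials):
        if shared_model:
            # One shared model: everyone reacts to the same (possibly wrong) signal.
            signal = rng.gauss(0.0, 1.0)
            sells = n_institutions if signal > 2.0 else 0
        else:
            # Independent models: each institution has its own noisy reading.
            sells = sum(rng.gauss(0.0, 1.0) > 2.0 for _ in range(n_institutions))
        worst = max(worst, sells / n_institutions)
    return worst

print("worst sell-off, independent models:", worst_simultaneous_selloff(50, shared_model=False))
print("worst sell-off, one shared model:  ", worst_simultaneous_selloff(50, shared_model=True))
```

With independent readings the worst simultaneous sell-off stays at a modest fraction of the market; with one shared signal it eventually reaches the whole market at once - the kind of correlated failure that individual risk models would not flag.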

I am actually (cautiously) positive about the opportunities in front of us, but they will require careful navigation. We will need to overcome the challenges of 'AI groupthink' and information security, and this will require a deeper understanding of the role of AI in complex systems.

I can think of several potential remedies. First, promoting the diversification of AI models can help reduce the homogenisation of ideas and foster creative problem-solving. The cost of training and operating such models means that this is also something that can (and should) be supported by national AI strategies.

Second, encouraging collaboration between AI developers, policymakers, and businesses, including education on how to avoid inadvertent sharing of information, can help create a shared understanding of the risks and opportunities. This will also have to feed into the development of best practices and (updated) regulatory frameworks and strategies (e.g., [3,4]).

Last but not least, the development and sharing of open-source AI models and tools can further promote diversity and innovation in AI development and usage, while also allowing for greater scrutiny and improvement by the wider community. These potential solutions will require work on mitigating the risks that come with the wide availability of language models, and a close look at the trade-offs.

Overall, it is critical to understand the potential risks that widespread use of AI language models poses to innovation, intellectual property, and the systems we operate and live in. These challenges need addressing, and they also require a continuing discussion about the wider impact of such technology. That discussion will be important for striking the right balance in our use of AI in daily decision making, and for maintaining our capacity to be innovative and creative.

[1] Jay Dixit. Algorithmic Bias Is Groupthink Gone Digital, 2019.
https://neuroleadership.com/your-brain-at-work/algorithmic-bias-groupthink-gone-digital/

[2] James Arvanitakis, Andrew Francis, Oliver Obst.
Data ethics is more than just what we do with data, it’s also about who’s doing it, 2018.
https://theconversation.com/data-ethics-is-more-than-just-what-we-do-with-data-its-also-about-whos-doing-it-98010

[3] Australia’s Artificial Intelligence Ethics Framework
https://www.industry.gov.au/publications/australias-artificial-intelligence-ethics-framework

[4] Artificial Intelligence Strategy of the German Federal Government, 2020 Update.
https://www.ki-strategie-deutschland.de/files/downloads/Fortschreibung_KI-Strategie_engl.pdf