Regardless of whether you see artificial intelligence as a trend, as hype or as an evolutionary step, the phenomenon is here to stay. That is why we have been monitoring the topic since the start of the Prototype Fund; here we summarise what we have found.
Letting machines learn: Technologies for the future
In connection with the thematic focus of the fifth funding round in 2018, we looked at the risks and unanswered questions surrounding machine learning. The risks include:
- the concentration of large data sets in the possession of a few companies and research institutions;
- opaque data processing, the misinterpretation of machine learning results and unclear responsibilities;
- unfair decisions based on systematically biased data;
- the concealment of human labour required for the use of machine learning.
Five key questions for the use of machine learning were derived from this:
- What role can the development of open source infrastructure play?
- How exactly do machine learning and artificial intelligence work?
- How can existing injustices be addressed (and reduced) rather than exacerbated?
- Which social issues can be better addressed, and how?
- What are the dangers, myths and opportunities?
You can find the full report (in German) here.
Don't believe the hype
By analysing a wide range of technological trends and hypes in 2020, we identified arguments for software funding for the common good that is open to different technological approaches rather than restricted to specific technical methods such as machine learning: software development should be geared towards the social problems to be solved, not towards the technical possibilities. Artificial intelligence is not the most effective or efficient tool in every context in which it could be used. Developers should therefore determine for themselves which technical method suits the use case at hand.
You can find the full report (in German) here.
Generative AI in the hands of civil society
In light of the increasing availability and quality of generative AI, we asked ourselves in 2024 how civil society projects developing and using open source software can benefit from this development and what support they need to do so.
In addition to promising new application contexts, for example code assistants, we also identified a number of challenges:
- the need to renegotiate the conditions under which AI fulfils the open source principles;
- the question of whether the potential risks of AI justify restricting the free use required by the open source principles;
- the financial and natural resources required for the development and deployment of AI models and applications;
- increasing dependencies on a small number of companies that develop large AI models and control the resources required to do so;
- legal uncertainties and increasingly complex regulation of AI, including copyright, data protection and AI law.
Software support programmes can respond to these challenges with, among other things, the following measures:
- continuous revision of application and selection procedures so that they define as precisely as possible the conditions under which projects developing or using generative AI are eligible for funding;
- additional support through expert advice and financial resources, e.g. for hardware or the use of cloud services.
You can find the full report (in German) here.