Creating a Meshwork of Communities of Practice for Unleashing the Emancipatory Potential of AI-Enhanced Collective Intelligence
An expanded version of the presentation at the 9th International Conference on Innovation and Knowledge Management in Asia Pacific (IKMAP2018), in Hangzhou, China, Nov 1, 2018
George Pór, Meridian University
How can the astounding powers of Artificial Intelligence (AI) be put in service of galvanizing the even more awesome powers of the Collective Intelligence (CI) of our communities, organizations, and the human species, for the sake of the greater good? That’s the question I begin to address with this paper and with the present stage of my life as a CI researcher (since the mid-80s).
Many of the current explorations of the dangers of an unfriendly Artificial General Intelligence (AGI), or of the promises of a human-friendly version of it, are largely speculative, because AGI requires achieving human-level intelligence and sentience, a prospect as questionable as faster-than-light travel. In the media and in public discourse about AI, the difference between general AI and narrow AI is frequently blurred. General AI is supposed to be able to perform any intellectual task that a human being can, and we are decades away from it. Narrow AI can solve specific problems in a well-defined, narrow domain. In this paper, “AI” refers to narrow AI.
This paper will outline ways to actualize the possibility of:
- Augmenting the self-reflective, collective intelligence of humans and their communities with AI.
- Creating a conceptual framework for Generative Action Research to address the challenges of individual and collective intelligence augmentation (IA) with AI for the common good.
- Facilitating collaboration among civic-minded AI practitioners and academics in Asia and the West, who are attracted to the first two challenges.
- Convening an international meshworking conference (of leading AI researchers, entrepreneurs, NGOs, policy-makers, technology executives, and philanthropists) to identify priorities for both near-term and long-term development and jointly articulate recommendations for national/international policy-making bodies.
- Using the meshworking conference for forming human-facilitated and AI-enabled communities of practice to realize the emancipatory potential of AI for prosperity for all.
Collective intelligence is defined in this paper as “an emergent quality of social groups (of any size), which enables them to evolve towards higher-order harmony and complexity, through networks of interacting individual capacities and such innovation mechanisms as differentiation and integration. Of course, that is only one of the many definitions of CI. It is seen through the ‘evolutionary’ lens and differs from the ‘wisdom of crowds’-type CI. The emphasis on the emergent quality distinguishes it from ‘additive CI’ that merely states, ‘two minds are better than one’.” (Pór, 2014)
Communities of practice are self-organizing and self-governing groups of people who share a passion for the common domain of what they do and strive to become better practitioners. They create value for their members and stakeholders through developing and spreading new knowledge, productive capabilities, and fostering innovation.
Meshworks are multi-stakeholder social spaces for structured collaboration across various sectors, communities of practice, and other actors, to achieve a common purpose.
“The coming AI revolution will bring about either the best of times or the worst of times.”
— Kai-Fu Lee
From the kingdom of necessity… to the kingdom of freedom
Are you intrigued by the density of these abstractions and the many terms waiting to be unpacked? Let’s unpack them together, starting with the “emancipatory potential of AI.”
Did you know that global GDP could be 14% higher in 2030 as a result of AI – the equivalent of an additional $15.7 trillion? Labour productivity improvements are expected to account for over half of all economic gains from AI; increased consumer demand resulting from AI-enabled product enhancements will account for the rest. The greatest economic gains from AI will be in China (26% boost to GDP in 2030) and North America (14.5% boost), equivalent to a total of $10.7 trillion and accounting for almost 70% of the global economic impact, according to recent research by PwC.
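As a quick sanity check of the PwC figures (the dollar amounts are the report’s projections quoted above; only the arithmetic is mine):

```python
# Back-of-envelope check of the PwC projections quoted above.
# All dollar amounts come from the report, not from new data.
ai_gain_2030 = 15.7e12       # AI's projected addition to 2030 global GDP (USD)
boost_fraction = 0.14        # "global GDP would be 14% higher"

# Implied size of the 2030 world economy without the AI boost:
baseline_gdp = ai_gain_2030 / boost_fraction
print(f"Implied baseline 2030 GDP: ${baseline_gdp / 1e12:.0f} trillion")

# China's and North America's combined share of the AI gains:
china_na_gain = 10.7e12
share = china_na_gain / ai_gain_2030
print(f"China + North America share: {share:.0%}")  # ~68%, i.e. "almost 70%"
```

The numbers are internally consistent: a $15.7 trillion gain at 14% implies a baseline world economy of roughly $112 trillion in 2030, and $10.7 trillion is indeed about 68% of the total gain.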
Those numbers are huge, yet they are tiny compared with the true potential that could result, in the next few decades, from the cross-fertilization of AI with nanotechnology, biotechnology, open data, and other exponentially growing tech trends.
If realized, that potential could lead humanity out of the kingdom of necessity (wage slavery)… into the kingdom of freedom (abundance for all). From an era where most humans have to toil in meaningless jobs to sustain their life… to an era, where work is becoming the joyous expression of the associated free agents’ creativity and aspirations.
“If realized…” but that’s a very big IF. For some, it’s impossible even to imagine, which is understandable given the centuries of a work ethic that has defined people’s self-worth by how hard they work. The emancipatory potential of AI opens a door to the best of times that Kai-Fu Lee and other AI visionaries are dreaming about and working for. Are we going to walk through that door, or through the one opening to the worst of times, in which a handful of mega-corporations run the world, a sliver of the managerial and techno-elite prospers beyond measure, and the disposable multitudes subsist on basic income?
One of the things we can do to choose wisely between the path leading to that dystopian future and the one leading to the best of times is to influence the directions in which AI is developing. Those directions are too important for the future of humanity to be decided by technologists and market forces alone. “Our AI future will be created by us, and it will reflect the choices we make and the actions we take. In that process, I hope we will look deep within ourselves and to each other for the values and wisdom that can guide us.” — Kai-Fu Lee
The multi-dimensional issues involved with the choices about our AI future are far too complex for anyone to fully comprehend, let alone influence on their own. Together, with the right design and preparation for a meshworking process, we do have a chance. What do you know about meshworks beyond them being “multi-stakeholder social spaces for structured collaboration across various sectors, communities of practice, and other actors, to achieve a common purpose”?
Meshworks are social activity systems in which the participating actors, organizations, and sectors co-create their influence on complex societal decisions that matter to them. They do that by creating greater connectivity and coherence in the social field, without centralizing solutions. The higher the diversity of concerned stakeholders participating in the mesh, the better the chances for a higher-level CI and impact. (It’s worth noting that meshworks also represent a new type of foundation for philanthropy.)
For a case study of large-scale meshworking, read “Developing a roadmap and meshwork for Millennium Development Goal 5.”
What does our better AI future need meshworking for? For example, in no particular order:
- Developing the societal “capability of big-picture thinking coupled with a quick-response intelligence” — Don Beck
- Accomplishing more with less and faster thanks to the synergy emerging from discovering “memories of the future” through collaborative scenario writing
- Piloting Intelligence Augmentation (IA) with AI
- Generating both critical mass and critical connectivity for a systemic impact
How can the impact of an initially small group of concerned citizens scale to the level needed to make a difference and ensure that AI, the science and technology building on humankind’s general intellect, will help usher in “the more beautiful world our hearts know is possible”? — Charles Eisenstein
Scaling up and across
For scaling up a collaboration and growing it into a system of influence, I developed the Generative Action Research (GAR) methodology, which I introduced in my keynote at the 2013 annual conference of the International Society for the Systems Sciences and outlined in my issue paper for the workshop on Collective Intelligence for the Common Good. I further elaborated on such qualities of GAR as self-sustaining, self-improving, self-evolving, and self-propagating in a recent essay.
Beyond its self-propagating aspect, what contributes to scaling up its impact are the following characteristics of the GAR methodology. It’s based on a process that is:
- Cyclic — Action and understanding go through cycles of deliberate and spiralling intervention and reflection. Cycle 1 starts with discovering the questions that are the most compelling to the main stakeholders of the research.
- Emergent — The design is not detailed in advance, so that each cycle can respond to the relevant knowledge emerging from the previous one. Thus, when specific outcomes cannot be predicted, the process remains flexible and is allowed to develop on its own.
- Participative — Key stakeholders of the project are actively involved in advising the process, reviewing and commenting on its purpose and design.
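The cyclic, emergent shape of the process described above can be sketched as a simple loop. This is only a toy illustration; all four callables are hypothetical placeholders for activities that real stakeholders, not code, would carry out:

```python
# Toy sketch of the GAR cycle structure: discover questions, intervene,
# reflect, and let each cycle's design emerge from the previous insight.
def gar_cycles(discover, act, reflect, max_cycles=3):
    """Run deliberate intervention/reflection cycles; returns the learnings."""
    insight = None
    learnings = []
    for _ in range(max_cycles):
        questions = discover(insight)  # Cycle 1 starts from compelling questions
        outcome = act(questions)       # deliberate intervention
        insight = reflect(outcome)     # collective reflection
        learnings.append(insight)
    return learnings

# Trivial demo with placeholder behaviour standing in for the real activities:
log = gar_cycles(
    discover=lambda prior: f"questions after {prior}",
    act=lambda q: f"acted on ({q})",
    reflect=lambda o: f"insight from ({o})",
)
print(len(log))  # → 3
```

The point of the sketch is only the control flow: the output of each reflection feeds the discovery step of the next cycle, which is what makes the design emergent rather than fixed in advance.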
Scaling for systemic impact doesn’t happen just by scaling up through self-propagation in expanding cycles of involvement. Another, potentially more productive, strategy is scaling across: connecting and developing collaboration horizontally with organizations, groups, and initiatives that have similar or complementary intent, and then sharing our discoveries, challenges, and inspirations. Below are the ones that I presently see on the horizon of an action research aimed at identifying and realizing the emancipatory potential of (narrow) AI.
- AI Now Institute
- CLAIRE (Confederation of Laboratories for Artificial Intelligence Research in Europe)
- Data and Society Research Institute
- Data Collaboratives
- Global Council on Extended Intelligence
- Partnership on AI
- The Ethics and Governance of Artificial Intelligence Initiative (MIT Media Lab and the Harvard Berkman Klein Center for Internet & Society)
- The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems
Who should be in the room? — 10 candidate research areas with challenge questions
Where should the first cycle of our Generative Action Research start? Who should be in the room, co-initiating it? To answer that question, we first need to define the specific challenges that we ask it to meet. Below is a list of 10 sample areas in which I have a personal and professional interest. I pair each area worth researching with a challenge question that suggests a practical orientation of the research.
If you want to be invited into the co-initiation process of GAR, let me know which of those issues/questions is of interest to you, too. If you don’t see yours in the list but you believe it should be there, please post a comment about why it should be included and what you’re already doing or plan to do in that area.
- The ABCD of civic technologies — connecting them with Artificial intelligence & Big data in City Democracies
How to prototype use cases that combine the power of Linked Open Data and AI with collaborative civic technologies for strengthening real democracy in the political ecosystem of the current shift from a planet of nations to a planet of cities?
- Serious gaming to grow collective capacity for climate resilience
Research shows that showing people research doesn’t work, but their direct experience may. If so, and if the experience of participating in well-designed serious games can spark “climate change” consciousness and remedying action, then could the combination of game mechanics and reinforcement learning support “climate resilience” policies and social movements?
- Living with climate change and turning it into a bitter but healing medicine
How could an opportunity-seeking AI, embodied in deep learning, use challenge propagation to help increase the fitness of organisations and larger systems in their sociocultural evolution, by learning to adapt to and thrive in the changing climate conditions?
What combination of collective human intelligence and a global sensor network would be useful to a reinforcement learning agent to assign threads to discover dependency relationships between a large number of climate observations and their forcing factors?
- Appreciating the return on collective intelligence
How do the open source movement, next-stage organisations, commons-based peer production, and other new forms of social interaction and coordination spin the distributed (collective) intelligence around the virtuous circle of increasing returns to society?
Can the evaluation of climate policy options by the collective intelligence of expert teams provide useful information to policy-makers about the highest probable social return on the investment in those options, applying a trained Logistic Regression Classifier (as it is already used in the private sector)?
Image source: “Designing for the Emergence of a Global-scale Collective Intelligence,” by George Pór (paper presented at the Global Brain Workshop, Brussels, July 3-5, 2001)
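To make the logistic-regression question above a little more concrete, here is a minimal, self-contained sketch. In practice one would use a library such as scikit-learn; the hand-rolled gradient descent below is only to keep the example dependency-free, and every feature name and data point is invented for illustration:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(X, y, lr=0.5, epochs=2000):
    """Fit a logistic regression by plain gradient descent on the log-loss."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi
            w = [wj - lr * err * xj for wj, xj in zip(w, xi)]
            b -= lr * err
    return w, b

# Invented training data: each row describes a past climate policy option as
# [expert-panel consensus score, normalised cost, normalised emissions cut];
# label 1 means the option later proved to yield a high social return.
X_train = [[0.9, 0.2, 0.8],
           [0.4, 0.9, 0.2],
           [0.8, 0.1, 0.7],
           [0.3, 0.6, 0.1]]
y_train = [1, 0, 1, 0]

w, b = train_logistic(X_train, y_train)

# Score a new option evaluated by an expert team:
new_option = [0.7, 0.3, 0.6]
p = sigmoid(sum(wj * xj for wj, xj in zip(w, new_option)) + b)
print(f"P(high social return) = {p:.2f}")
```

The classifier’s output is a probability, which is exactly the kind of information a policy-maker could weigh when ranking options by expected social return.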
- Galvanizing societal innovation with AI-enhanced soft power
What kind of societal innovation projects involving soft power (could) have big enough data sets to benefit from the power of computer simulation or machine learning, thus enabling the synergy of human and artificial intelligences?
- Electrifying the collective consciousness of networked social movements
How could appropriate combinations of transformative scenario planning, generative scribing, social data mining, collective sensing organs, movement sense-making and other tools and processes for collective self-reflection, be put in service of the networked movements of the multitudes?
- Democratizing AI
Diverse and broad-based participation in AI-enabled CI projects (that call for generating new options) is a condition for their success. What information can be useful for AI to become instrumental in developing tools that facilitate laypeople’s learning to master AI?
How to validate practices worth replicating in AI design for the common good and civic technologies, in direct engagement with the communities of practitioners on the ground?
- Evaluating and increasing synergy in meshworks collaboration
What parameters of collaboration among meshworking partners need to be defined for assessing the applicability of a Foraging Search algorithm to create a synergy index as a multiplier of their collaboration power?
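I make no attempt at the Foraging Search algorithm here, but one naive, purely illustrative way to operationalise a “synergy index” as a multiplier of collaboration power is the ratio of what the partners produce together to what they produce alone (the formula and numbers are my own toy example, not from any of the initiatives above):

```python
def synergy_index(joint_output, individual_outputs):
    """Toy synergy index: joint output relative to the sum of what the
    partners produce working alone. Values above 1 indicate synergy;
    values below 1 indicate that the collaboration is destroying value."""
    solo_total = sum(individual_outputs)
    if solo_total == 0:
        raise ValueError("individual outputs must not sum to zero")
    return joint_output / solo_total

# Three meshwork partners producing 10, 12, and 8 units of value alone,
# but 45 units when collaborating:
print(synergy_index(45, [10, 12, 8]))  # → 1.5
```

Even this crude multiplier makes the research question measurable: defining which collaboration parameters drive the index above 1 is precisely what the challenge question asks.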
- Birthing a higher-order collective intelligence
What is the role of “inner technologies” in developing that kind of shared mindfulness, which will let us reach a higher-order collective intelligence necessary to guide the development of the right machine learning tools for humankind to navigate safely the turbulent waters of this century?
- Designing conditions for the emergent superorganism’s distributed intelligence
Emergence, by definition, cannot be designed but the conditions favouring emergence can. What can we learn from the functions of intelligence in human beings, which would inform our design for the distributed intelligence of the planetary meta-being?
If some of these challenge questions strike a resonant chord in your heart and mind, let’s explore how we can design a collective intelligence experiment to address them. We could even use the collective intelligence in the global network of the users of OpenIDEO and its design thinking tools to develop, refine, and scale these challenge ideas, and if we’re lucky, even access funding to advance and implement them.
Ultimately, I’d like to co-initiate the convening of an interdisciplinary gathering for augmenting the collective intelligence of the field of collective intelligence itself. In fact, with some colleagues, I tried to organise a Collective Intelligence Convergence as early as 1993, and again in 2007, still ahead of my time, when neither the field nor my idea was quite ripe for it to happen. (However, the 600-page anthology, Collective Intelligence: Creating a Prosperous World at Peace, grew out of those efforts.)
Since then, I have learned that boosting the CI of the field of CI is not my job, in fact not any one person’s job, but that of a network of civic-minded CI researchers and users, AI and data scientists, concerned NGOs, policy-makers, and philanthropists. If you’re one of them, let’s talk.