Conceptual & action-research framework for hybrid intelligence

“As technology is empowering our choices and we are getting something like the power of gods, you have to have something like the love and the wisdom of gods to wield that or you self-destruct.”
—Daniel Schmachtenberger

Name of Project:
Wisdom-Guided Collective Hybrid Intelligence (CHI) in a Network of Human & AI Agents

Proposal in one sentence:
Creating a conceptual and action-research framework, complete with sample use cases, methodology, and a roadmap, for generating a new form of intelligence that I call “HybridInt” in a network of human & AI agents.

Description of the project and the problem/opportunity it is meeting:
Between techno-utopians and techno-doomsayers, the emancipatory potential of AI is lost. To realize AI’s full potential for the common good, we must cultivate hybrid intelligence. It is an emergent quality generated not by the simple addition of AI to human intelligence but by the synergy of their strengths. On the AI side, these strengths include decentralized, explainable (XAI), and participatory (PAI) approaches. On the human side, they include the human capacity for activating collective intelligence and wisdom (which cannot be mimicked by algorithms).

My starting point is that HybridInt is the intelligence of a post-human lifeform, also referred to as “planetary sapience,” that can compensate for the limitations of both human and artificial intelligence.

Research hypotheses:

  1. HybridInt, emerging from the synergy of human and AI agents, can demonstrate higher cognitive powers (e.g., concept mapping and visualization, hypothesis development, imagination, learning abilities, and memory) than either category of agents would by itself.

  2. HybridInt has the potential to meet some of the challenges generated by previous stages in socio-economic and meaning-making structures if both the human and AI agents are given the opportunity to co-evolve their skills and knowledge.

  3. The application of HybridInt to real-world challenges can fulfill its potential only if it’s guided by prosocial ethical values and wisdom. (When it comes to the actual research, beyond the concept paper writing, I’d like to verify this hypothesis in collaboration with Algovera, by building and co-facilitating a squad composed of human and AI agents on Web3 infrastructure with a regenerative goal.)

The research that I’ll describe in the proposed concept paper will be conducted using my Generative Action Research (GAR) methodology combined with Participatory AI. GAR will also be applied to the paper’s development and represent its first cycle. The roadmap will include the first milestones of applying the framework to potential use cases, e.g., the Kernel community, enhancing wisdom in Web3 organizations, and the epistemic commons of the meta-crisis.

Grant deliverables:

  1. A concept paper complete with:
  • Literature review, with an eye on finding evidence that supports or refutes the research hypotheses
  • Explanation of key distinctions used in the paper
  • A first draft of an integrated set of research questions about developing the capabilities ontology, collective epistemology, and wisdom-oriented ethics of HybridInt, for an academic paper (looking for co-authors)
  • List of experts to survey
  • Draft of the research funding strategy
  • High-level design of some potential use cases, waiting for confirmation by the concerned parties: the Kernel community, the inner game of AI (+ first outline of a book about the same), and eventually, the Radar DAO, the River DAO, and a tool for accessing AI-discovered patterns that connect emergent meaning in the work of Daniel Schmachtenberger and the liminal Web.
  2. Short, screen-recorded video of the co-explorations of “existential risk” and “ubuntu” with my unpaid AI research assistants, as an experimental demo of emergent HybridInt

Spread the love
I’d welcome colleagues who are adept at no-code prompt engineering and can help develop a compelling narrative by translating the research questions into prompt chaining and fine-tuning, using ChatGPT/HuggingFace/ChatSonic. Also, fellow users of Loom and/or Obsidian, who can help the project make better use of those tools.
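To make “prompt chaining” a little more concrete, here is a minimal sketch of what chaining a research question through successive model calls could look like. This is purely illustrative: the `ask` function is a placeholder, not a real API, and the step templates are my own invention; a collaborator would replace `ask` with an actual client for ChatGPT, a HuggingFace endpoint, or similar.

```python
# Minimal prompt-chaining sketch: one research question is passed through a
# sequence of prompt templates, each step's output feeding the next prompt.

def ask(prompt: str) -> str:
    """Placeholder for a chat-model call; swap in a real API client here."""
    return f"[model response to: {prompt[:60]}...]"

def chain(question: str) -> str:
    """Run a three-step prompt chain over one research question."""
    steps = [
        "List the key concepts in this research question: {x}",
        "For each concept below, suggest how human and AI agents could each contribute: {x}",
        "Summarize the previous analysis as three testable sub-questions: {x}",
    ]
    result = question
    for template in steps:
        # Each template wraps the previous step's output.
        result = ask(template.format(x=result))
    return result

print(chain("Can human-AI networks demonstrate higher collective cognition?"))
```

The point of the sketch is only the shape of the pipeline: research questions become structured prompt sequences whose intermediate outputs are inspectable, which fits the explainability (XAI) emphasis of the proposal.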

Squad lead:
George Pór

Twitter handle: @technoshaman
Discord handle: Technoshaman#5921


In the proposal above, I wanted to link the phrases “epistemic commons” and “meta-crisis” to some explanation, but Discourse allows new users to include only two links in a post. So, here they are:

The epistemic commons of the meta-crisis network-graphed here.

Please do not hesitate to ask questions and give me your feedback/feedforward. This is a pretty ambitious undertaking, and it could benefit from more of us adding our voices to it.

I did some experiments in hybrid intelligence by interviewing ChatGPT about it here.

This kind of conversation with an AI is a prep for the research that will dive into HybridInt. Unlike ordinary hybrid intelligence, HybridInt thrives not on simply adding AI and human intelligence together but on the sympoiesis of the two.

Before sympoiesis can come alive, we have to marry the inner game of AI research (recorded in the researcher’s prompt-structured learning journal) with the “inner game” of AI itself, captured, e.g., by how it meets explainability requirements.

Another relevant descriptor is what I wrote in my blog on Forms of Collective Intelligence in 2006.

Human-machine CI: This form of CI leverages the synergy of the human mind and its electronic extensions, drawing on the best capacities of both. The “collective” includes symbiotic networks of humans and computers working together and developing compound capabilities. It can also support all other forms of CI.

@Mark asked,

what are the intentional assumptions you are making and how do they orient the ethics of your project?

Here are some that I am aware of:

  1. Hybrid intelligence and its components can have atrocious, sociopathic outputs unless they are guided by wisdom, described as follows:

People, communities, and organizations are wiser when their views and decisions serve the well-being and well-becoming of all by taking into account more of the interdependent contexts and long-term consequences of their actions. Even the wisest individuals can have only a very limited capacity to account for transcontextual variables in the consequences, compared with the potential of a wisdom-inspired collective entity.

“So the wisdom we’re seeking is necessarily co-created by diverse people and firmly grounded in an expanded sense of reality – especially embracing the aliveness of human and natural systems and interactions.” —Tom Atlee

  2. Technologies, including AI, can have both destructive and emancipatory potential.

  3. To fulfill its emancipatory potential, hybrid intelligence needs to be generated not by the simple addition of human intelligence and AI but by the sympoiesis of the two.

I’ll respond with the intention of providing “food for thought”:

  1. Wisdom that assumes complex systems can be predicted and controlled is itself problematic. I’m unsure how collective wisdom is operationalized; do you have examples? I think your assumption would conflict with wisdom defined as “using knowledge effectively.” It seems you consider wisdom to be about achieving a particular moral outcome effectively? I imagine that the WEF believes it is aligned with “wisdom.” In general I agree that we should aim to be increasingly sensitive to the context, but doesn’t this lead to accepting that we aren’t wise?

  2. I’m unsure if this implies, for you, that technology is amoral. I would argue for AI as a “technology of ethics”. The differential development of AI is something I’ve seen mentioned in Cooperative AI (but I think they still assume AI needs to ultimately have agency).

  3. This seems to assume that AI is different from human intelligence. This is certainly the case today but isn’t the dominant research intention to subsume human ability with AI? What is unique that humans will bring to this synergy?

Aren’t you assuming a particular philosophical framework for the research?

I may or may not be. It’s more likely that our philosophical frameworks are different, which, when both are respected, we can take as a potential source for enriching our community’s collective wisdom, no?

For clarification’s sake, I neither said nor would I ever say that complex systems can be predicted and controlled. (I’ve been a student of complexity sciences for 30 years.) However, I did mention something that could be construed to that effect, and I appreciate that you caught it.

People, communities, and organizations are wiser when their views and decisions serve the well-being and well-becoming of all by taking into account more of the interdependent contexts and long-term consequences of their actions.

Some consequences are easier to foresee than others. Foresight is not the same as prediction, let alone control. To further clarify my take on this, let me quote one of my fave futurists, James Burke:

Foresight starts from a place of humility—we cannot predict the future—and an acceptance of ambiguity.

We can, however, look for clues about the future and create different pictures and stories of ways the future might unfold. Foresight is a discipline that collects and scans trends and conditions to forecast, not predict, possible futures.

The diagram below offers a simple phased approach to foresight, starting with a search for clues (Sense), pulling together the results and looking for the ways the dots connect (Analyze), using those insights to plan ways to live into that future (Shape), and repeating the process. The first touchpoint with complexity is the recognition that while this process is sequential, it is not necessarily linear, but instead is networked.

Burke’s Sense - Analyze - Shape process of applying foresight to a complex landscape is isomorphic with my three vectors of sovereignty that precede wisdom (Sense-making - Meaning-making - Choice-making). Thank you @Mark for letting me see this consequential correlation that I’ve discovered just now!
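As a toy illustration, the Sense - Analyze - Shape cycle can be sketched as a simple iterative loop. The function names mirror Burke’s phase labels, but the data model and all specifics here are my own illustrative assumptions, not anything from Burke’s material:

```python
# Toy sketch of the Sense -> Analyze -> Shape foresight cycle.

def sense(signals: list[str]) -> list[str]:
    """Search for clues: scan and collect trends and conditions."""
    return signals  # in practice: gather fresh signals from many sources

def analyze(signals: list[str]) -> dict[str, list[str]]:
    """Connect the dots: cluster the signals into candidate patterns."""
    return {"patterns": sorted(set(signals))}

def shape(insight: dict[str, list[str]]) -> list[str]:
    """Use the insights to plan ways to live into a possible future."""
    return [f"explore: {p}" for p in insight["patterns"]]

signals = ["decentralized AI", "regenerative DAOs", "decentralized AI"]
for cycle in range(2):  # the process repeats: sequential, but not linear
    plans = shape(analyze(sense(signals)))
    signals = signals + plans  # outputs feed back into the next sensing pass
```

The feedback line at the bottom is the “networked, not linear” point: what the Shape phase produces changes the landscape that the next Sense pass scans.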

Unfortunately, Discourse doesn’t let me include a link to my unpublished essay on Enhancing Wisdom in Web3 Organizations, where I wrote about how sovereignty precedes wisdom, but if you or anybody else wants to read it, I will stick a link to it in the Discord channel of my proposal.

If you don’t know whether you are using a particular philosophical framework and you are engaged in a philosophical project then I think you need to investigate this.

My question was not intended to raise a comparison. Although I do believe every philosophical framework has its assumptions/beliefs. The question was about your framing, it is not proposing another framing.

Exposing what you are doing to criticism is not easy. What seems to have happened is that you have retracted from that and adopted something closer to: you have your opinions and I have mine. While that is certainly neighbourly behavior, I do not see how it will help advance your project.

I’ve seen similar discussions about the difference between forecasting and prediction, and I think they are using a bizarre definition of prediction. Prediction obviously does not mean having a crystal ball and knowing the answer. For example, if I predict the outcome of rolling a weighted die, this does not imply I know the result. Basically, I see forecasting as a subset of prediction, where prediction is used in a particular way (e.g., I can make predictions about the past, but that is not the objective in foresight). I guess you are familiar with Futures Studies, but I mention it just in case.

A bit more grist for the mill of clarifying the nature of our (eventual) differences:

In my view, it doesn’t conflict. It’s more like the first transcends and includes the second.

Nope. Wisdom is not goal-oriented, it’s not about achieving a particular outcome, moral or otherwise.

Wisdom is an always-emergent quality of our relationship with a widening window on reality and a deepening meaning of what we see through it, individually and collectively.

(I’m grateful to the quality of our exchange that inspires this kind of first-articulations.)

For more background,
here are the links to two pieces of writing by Tom Atlee, a long-time colleague of mine in the collective intelligence and wisdom field:

Some Ways We Can Be Wise

A Q&A on Wisdom

I am very suspicious of wisdom that does not translate into tangible material outcomes. While I believe wisdom is applicable to immaterial (in the sense of scientifically unmeasurable) domains, I would not want it to be limited to that. The living conditions of those not in positions of power seem like one good indicator of the “wisdom” of a culture/society.

By disconnecting wisdom from outcomes, I think we set ourselves up for the sort of “metaverse” of propaganda and alternative facts that we are being submerged by. For example, one “wise” person can replace science in the case of medicine. Or we can claim to be the leading democracy without ever measuring this.

Maybe it would be clearer to say that while your personal approach to wisdom may include concern for objective outcomes, if the definition of wisdom you are using allows for wisdom that is not concerned with this then it seems problematic. For example, it seems that wisdom might include respecting traditions and in this regard someone with a much wider window on reality could act in ways that are far less wise.

Absolutely! We are very far from the wise culture/society that we’d both love to see manifest. A deliberately developmental society would be a step towards it. Moving in that direction is what motivates me when I seek a micro-grant for writing the concept paper of the Developing Wisdom-Guided Hybrid Intelligence in a Network of Human & AI Agents research. If I get the grant, I will take into account all contributions from our conversations.

Depends on which traditions we talk about and whether they are being used to support or hinder the next evolutionary wave of consciousness, to use a term that computes even less than wisdom… :grinning_face_with_smiling_eyes: