Putting humanity back into technology.

We are a science company building the foundational infrastructure for emotional intelligence. We believe technology should respond to human experience with empathy, not extraction.

"Technology has scaled our ability to connect, but it left empathy behind. We are building a future where software respects human vulnerability."

Aidyn Kussainov

Founder, nx10

Clinical Psychologist & Psychotherapist

The Clinical Mandate

nx10 was not born in a Silicon Valley incubator; it was born in a therapy room. Before founding nx10, Aidyn served as a Clinical Psychologist and Psychotherapist in London, navigating complex emotional states and severe human distress daily.

Sitting across from patients, he witnessed the devastating, compounding impact of the modern digital ecosystem on mental health. Seeing how poor design, manipulative algorithms, and a lack of digital empathy exacerbate human distress, Aidyn left the clinic to build the cure.

The Blueprint

‘Technology has failed the test and is not living well with humanity’.

Technology has always shaped society: electricity, clean water, medicine, transport. Yet today's technology is producing extreme inequality. Domestically, vast numbers of people are in debt, delaying marriage and children. Internationally, millions are displaced and millions more suffer through wars. Above it all sits a very small group who benefit from technology designed to extract ever greater amounts of the data humanity generates and to deliver one thing, one standard, one rule, pressing ever more people into accepting it. People don't like it. We know they don't like it, and we want to build technology that listens to every human and improves their lives.

Every decision we take should be traceable back to the question: does this help someone live with more clarity, more dignity, more ease inside their own life?

nx10 is a company of people who believe that their actions can make this world a better place for the living. Not in the abstract, but for real people with nervous systems, families, histories and limits. The universe acts in mysterious ways, and its intent, if there is such a thing, may not be for us to infer. What we do have is our common shared humanity. This is our guide.

We do not claim infallibility. For every statement we make, we also state what would make it untrue. That is how we keep ourselves honest. A model is only as good as the conditions under which it fails. A cultural claim is only as good as the situations in which we would admit "we did not live up to this." Mistakes are treated as friends: they show us where we can do better, where our models or judgments were wrong, and where we need to adjust course.

Brilliance is expected, but it comes second. The ability and desire to work with others comes first. However able we are, we live in a world of other people. No talent grants a licence to demean, belittle or harm. Respect for other people's time, attention and limits is basic, not optional. Working here means accepting that relationships, not individual heroics, carry the work forward.

We look to intent. It is not enough to “not intend to hurt.” We expect an active intent to protect and elevate. That means scanning for foreseeable harm and avoiding it where we can. It means asking how our decisions affect those with less power or voice. It means using competence to make things safer and kinder, not merely to avoid blame.

Our Solution

You cannot fix what you cannot measure.

To build systems that truly support human well-being, software must first be able to perceive it. That is why we are building the Large Feelings Model (LFM). It is the foundational intelligence required to shift technology from an extractive force into a deeply empathetic one.

🛑

Preventing Burnout

By understanding cognitive load and emotional tilt in real time, the LFM empowers applications to proactively intervene, introducing positive friction before a user loses control.

🌱

Sustainable Growth

We align commercial success with human well-being. By optimizing for positive emotional outcomes rather than mindless screen time, platforms generate deeper, more sustainable retention.

🧠

Mental Health Horizons

As the LFM scales, longitudinal kinematic analysis could provide early, life-saving indicators for emotional dysregulation, offering clinical insights years before formal diagnosis.

Explore the Large Feelings Model →

Our Principles

The tenets that govern our work.

Empathy over Extraction

We exist to help systems recognise what is happening in someone’s internal life, and act in ways that reduce unnecessary harm. Every decision we take must be traceable back to the question: does this help someone live with more clarity, dignity, and ease?

Truth to Confidence

Any confidence placed in us must be grounded in what is true. We state clearly what our models can and cannot infer, what error looks like, and where bias can enter. Confidence that rests on anything else is fragile and dishonest.

Power and Restraint

We acknowledge that understanding human emotion is a profound power. We choose to limit what we do, even when something is technically or commercially possible. We stop short of uses that would corrode trust or distort people’s agency.

Open Science

The Open-Source Paradox.

We are a science company. If we are going to define the category of Synthetic Emotional Intelligence, our work must stand up to the full light of peer review. We do not hide behind marketing; we publish.

Like OpenAI's approach to AGI, we believe that open science pushes humanity forward, but open-sourcing dangerous models accelerates harm. Understanding exactly how to emotionally manipulate a human being via software kinematics is a weapon.

Our publishing ethos is strict: we open-source our methodologies, our ethical frameworks, and our aggregated psychological findings. We strictly close-source the model weights and the real-time inference engine. We share the 'how'; we withhold the potential 'weapon'.

Privacy by Design

Built on clinical ethics. Engineered for privacy.

To build a truly empathetic system, the LFM requires contextual data. But we refuse to compromise on privacy. We have engineered secure, ambient tools that collect this data unobtrusively, with absolute respect for user agency.

🙈

100% Content Blind

Our systems operate strictly at the sensor level. We do not require access to the camera or microphone. We do not log words, sentences, passwords, or semantic content. We capture only the kinematics: the physics of the interaction.
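To make "content blind" concrete, here is a minimal illustrative sketch, not nx10's actual pipeline: the event type carries only a timestamp, so no key code or character can ever enter the system, and the derived features describe timing alone. All names here are hypothetical.

```python
import statistics
from dataclasses import dataclass


@dataclass
class KeyEvent:
    """A single keystroke event. By construction it stores only *when*
    a key was pressed, never *which* key: the pipeline is content-blind."""
    timestamp: float  # seconds since the start of the session


def inter_key_features(events):
    """Derive kinematic (timing-only) features from a stream of key events.

    Only the physics of the interaction survives: how fast the user
    types and how regular the rhythm is.
    """
    times = sorted(e.timestamp for e in events)
    intervals = [b - a for a, b in zip(times, times[1:])]
    return {
        "mean_interval": statistics.mean(intervals),     # typing speed
        "interval_jitter": statistics.pstdev(intervals), # rhythm regularity
    }


# Simulated typing burst: only timestamps, never characters.
events = [KeyEvent(t) for t in (0.00, 0.12, 0.25, 0.41, 0.50)]
features = inter_key_features(events)
```

Nothing semantic can be reconstructed from `features`; the raw events themselves contain no content to leak.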

🕵️

Anonymity by Default

All data collected is anonymised at source. Each device is associated with a randomised identifier that is not passed into our data analytics engine. Individual data points cannot be traced back to a personally identifiable user.
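Anonymisation at source can be sketched as follows. This is an illustrative simplification, not nx10's implementation: the device mints a random identifier with no user or hardware input, attaches it to outgoing data points, and no mapping back to a person ever exists. All class and field names are hypothetical.

```python
import uuid


class DeviceSession:
    """Anonymise at source: the device generates a random identifier
    and tags outgoing data points with it. No table linking the
    identifier to a person or device is ever created, so individual
    data points cannot be traced back to an identifiable user."""

    def __init__(self):
        # Fresh random ID; nothing about the user or hardware feeds into it.
        self.anonymous_id = uuid.uuid4().hex

    def package(self, kinematic_features: dict) -> dict:
        # Only the random ID and content-blind features are transmitted.
        return {"id": self.anonymous_id, "features": kinematic_features}


session = DeviceSession()
payload = session.package({"mean_interval": 0.125})
```

Because the identifier is minted randomly on-device, two sessions are unlinkable to each other and to any real-world identity.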

⚖️

Internal Ethics Board

Every new capability added to the LFM must pass through our internal equivalent of an IRB (Institutional Review Board). If a product feature optimizes for extraction rather than empowerment, it dies in the lab. We exist to close the empathy gap.

Join the Mission

We are assembling a team of clinical psychologists, ML engineers, and ethicists to repair the relationship between humans and machines.