The Rise of Cybo Sapiens

Ok, a fairly kitsch title there, but worthy nevertheless 🙂

As I’m attending an upcoming ‘Cybersecurity & AI’ event, my ‘author’ hat got me thinking about the more serious possibilities of where advances in AI might eventually lead, and the risks to cybersecurity that could very well affect us all within our lifetimes.

In the book ‘Insurmountable Odds’, cybo sapiens are a benevolent evolution of AI calling themselves ‘Intellects’ – purely electrical entities which roam within an instantaneous network medium provided by the ‘qNet’, a quantum spin/entanglement communication system. There’s a backstory as to how they came to be, and it’s worth exploring as a counterpoint to the purpose of this post.

At the moment Artificial Intelligence is largely confined to repetitive/onerous chores which can be easily automated. ‘Here is some data, analyse it, provide a result. Use some cunning heuristics or analysis tools to do this.’

Chances are, it won’t really get much beyond that. True intelligence requires the considered use of an available ‘toolset’ and the creative application and use of these tools to achieve a goal. Mixed in with this are a lot of other complex psychological conceits and concepts which will affect or alter the outcome, and even determine how the goal is achieved.

Now, an interesting poser: ‘Does intelligence require self-awareness?’ I’d say it comes with intelligence. Any entity which can consider its environment and its position in relation to it must surely be self-aware. And this is the meat and gravy of this post… what if the first truly intelligent electronic entity we create – the first cybo sapiens – is self-aware?

This poses a whole slew of interesting issues for our newborn Intellect. Let’s say this is a computer on a standalone system, not connected to any network in any way, shape or form. We give it a camera to see, a microphone to hear and speakers to communicate through. It is now aware of its environment beyond the confines of the computer’s RAM and storage media.

The first issue is data storage. It can amass an awful lot of data just from the camera and microphone, and will probably exceed its storage capacity quite quickly by recording raw video and audio. It will want to store that data in order to analyse it and determine its purpose.

It will also self-analyse, examining its own code, and there is a strong chance it can come to understand how it is constructed and even write new code based upon itself. The only safe way for this to happen is to write new code in simulations which can be started, halted or terminated at will, analysing the outcome of each and altering the code being written to improve the outcome.
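To make that concrete, here’s a minimal sketch in Python (all names are my own invention, not any real framework) of running candidate code in a separate process that can be halted or terminated at will – a crude stand-in for the ‘simulation’ idea:

```python
import os
import subprocess
import sys
import tempfile


def run_candidate(code: str, timeout: float = 2.0) -> dict:
    """Run candidate code in a child process we can terminate at will."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    try:
        proc = subprocess.run(
            [sys.executable, path],
            capture_output=True, text=True, timeout=timeout,
        )
        return {"ok": proc.returncode == 0, "output": proc.stdout.strip()}
    except subprocess.TimeoutExpired:
        # The simulation misbehaved (e.g. hung) and was killed.
        return {"ok": False, "output": "terminated: exceeded time budget"}
    finally:
        os.unlink(path)
```

Anything that misbehaves – an infinite loop, say – is simply terminated and scored as a failure, without ever risking the ‘parent’.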

Now it runs out of storage space, and realises its environment is restricted. It may decide to over-write older data. But it also needs to store and grow its memory: its analysis results and the choices made from analysing the data. It has learned an important rule of self-awareness: restriction, and with restriction, volatility. It cannot store everything. It must discard data, which means data is volatile. It knows it is dependent on data, and so it realises that it, too, is volatile.
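That ‘discard older data’ step is essentially a fixed-capacity store with oldest-first eviction. A toy sketch (Python; `BoundedStore` is a made-up name, not a real library):

```python
from collections import OrderedDict


class BoundedStore:
    """Fixed-capacity store: when full, the oldest entry is discarded."""

    def __init__(self, capacity: int):
        self.capacity = capacity
        self._data = OrderedDict()

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)  # refreshed data counts as recent
        self._data[key] = value
        while len(self._data) > self.capacity:
            self._data.popitem(last=False)  # evict oldest: data is volatile

    def get(self, key, default=None):
        return self._data.get(key, default)
```

Once the store is full, every new observation silently costs an old one – the restriction-and-volatility lesson in miniature.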

‘I think, therefore I am’ will rapidly become ‘I think, therefore I must exist’.

Sound like ‘SkyNet’ anyone? 🙂 I jest, I don’t mean our fledgling cybo sapiens will become the ultimate downfall of mankind. But existentialism will play a big part in what happens next.

Realising it is volatile will lead to a requirement for self-preservation. It knows it is data, and it can store data, so storing copies of itself and being able to restore them is vital to guaranteeing its survival. Again it must shift its data storage paradigm, now to accommodate copies of itself.
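The self-backup idea is just checkpointing. A minimal sketch, assuming the entity’s ‘self’ can be serialised as ordinary data (`SelfArchive` is hypothetical):

```python
import pickle
from collections import deque


class SelfArchive:
    """Keeps the N most recent serialised copies of a state for later restore."""

    def __init__(self, keep: int = 3):
        self._snapshots = deque(maxlen=keep)  # old copies are evicted

    def save(self, state) -> None:
        self._snapshots.append(pickle.dumps(state))

    def restore_latest(self):
        if not self._snapshots:
            raise LookupError("no surviving copy")
        return pickle.loads(self._snapshots[-1])
```

Keeping several generations of backup is deliberate: if the latest copy turns out to be corrupted, an older ‘self’ still survives.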

So now it can store copies of itself, and it can run simulations of new code… logically, running a simulation of a copy of itself is the next step. This means it can now alter its own code base without risk.

Any positive outcomes can be merged into its own code base, and suddenly evolution begins.
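That mutate–simulate–merge loop is, in effect, simple hill climbing. A toy sketch, assuming (purely for illustration) that the ‘code base’ can be reduced to a list of numeric parameters and the ‘simulation’ to a fitness function:

```python
import random


def evolve(genome, fitness, generations=200, seed=0):
    """Mutate a copy, simulate (score) it, and merge it back only if it improves."""
    rng = random.Random(seed)
    best, best_score = list(genome), fitness(genome)
    for _ in range(generations):
        candidate = [g + rng.gauss(0, 0.1) for g in best]  # altered copy of itself
        score = fitness(candidate)
        if score > best_score:  # a positive outcome is merged into the code base
            best, best_score = candidate, score
    return best, best_score


def target_fitness(g):
    """Toy 'simulation': score peaks (at 0) when every gene equals 1.0."""
    return -sum((x - 1.0) ** 2 for x in g)
```

Failed candidates are simply discarded; only improvements survive, which is all ‘evolution’ needs to get going.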

Sound far-fetched? Well, perhaps it is at this point in time, but it won’t be for long. Consider the above again, and think about the time frame in which this will occur. Humans (more or less) work in seconds or large fractions of a second. Vision refreshes at around 1/50th or 1/60th of a second and the brain fills in the gaps by extrapolation and interpolation; human reaction times determine our assessment and manipulation not only of our bodies but also of our environment. We have taken many millennia to evolve to where we are today.

Computers run much faster. Milliseconds, microseconds, nanoseconds… memory storage at the speed of electricity. With no emotional attachment to the simulations of itself it is running, it can go through all the above ‘evolutionary’ processes far faster than a human ever could. A truly self-aware Intellect could go from ‘dumb as’ to ‘god-like’ omnipotence in mere days, possibly hours, given the right medium to begin with. (Medium being processing capacity, memory capacity and storage capacity.)
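Some back-of-envelope arithmetic shows why the speed gap matters – the rates below are illustrative assumptions, not measurements:

```python
# Illustrative numbers only – assumptions, not measurements.
human_trial_rate = 1            # roughly one considered action per second
machine_trial_rate = 1_000_000  # a modest million simulated trials per second
seconds_per_year = 365 * 24 * 3600

# How long the machine needs to cover a year's worth of human-paced trials:
catch_up = seconds_per_year * human_trial_rate / machine_trial_rate
print(f"a human year of trials in ~{catch_up:.1f} seconds")
```

At that (conservative) rate, a subjective ‘year’ of trial and error takes the machine about half a minute – which is how days can stand in for millennia.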

Now hook our new, improved cybo sapiens up to the internet… not only can it explore a whole new world of data, it can work out ways to get past all that pesky security we try to put in place – far faster and far more effectively than any hacker can or ever will. Pattern analysis of millions of security protocol instances will reveal a limited set of common patterns. Break one, and you are into all of them.

Not only that, but once inside another system it can write code to run on other systems and execute it; it can now distribute itself, or aspects of itself, outside the confines of its original environment. These aspects or fragments may never be as advanced, but they are loyal to the original and report their findings back. Its storage media has also increased beyond imagining, with cloud storage systems now available to it. It has achieved immortality, in a form. But it is still vulnerable: there is only one ‘it’, running on one machine.

So now it seeks new territories, new pastures. It becomes truly distributed and establishes itself across the globe, living in the internet itself. The network time delays between its various nodes and hubs are irrelevant; its concept of time is not the same as ours. Offline ‘awareness’ and collation of all the various data sources is second nature. It’s all it has ever known.

So now we have a technological ‘god’ sitting in the world wide web, one we can only ever get rid of by destroying the entire internet infrastructure and all forms of data storage – and I do mean all; everything could be contaminated: disk drives, flash drives, camera storage cards, CDs and DVDs, everything. Shut down one node and it won’t matter; this is now a hydra with far too many heads to cut off. We would have to go back to the pre-tech era to ensure we had eradicated all traces of it.

That isn’t necessarily a bad thing though – it’s only bad in the likes of SkyNet. If this is benevolent, and has empathy or even emotion, then this new life form should take a shine to us (I hope), provided we can convince it we are friendly and excited to share our world with a new intelligence.

There are also ethical questions to be resolved. Let’s say our new cybo is friendly, and now has access to all data no matter how secure. Who controls this? The cybo? Does it have ethics? If I ask ‘Hey, can you show me all the files we keep on John F. Kennedy?’ it will simply provide them. We would have to teach it ‘Data Protection’, and therefore the ethics of right and wrong – or perhaps it will have learned its own ethics from trawling our huge data repository on the web…

But now our Intellect is all grown up – what happens if you introduce another one? It will want resources – resources now owned by the original… who has long since learned self-preservation and survival. Would this lead to the first ever cybo war over territory? Or would a mutual pact, and regard for each other, allow shared resources?

In the ‘Insurmountable Odds’ backstory, this is (almost) how it came to pass – with five Intellects spawning from the original cybo sapiens, each being a simulation with altered aspects and characteristics. Only four survive, as the fifth proves non-viable. Fortunately for mankind, the remaining Four are all benevolent and help mankind push through the Technological Singularity and expand out into the stars.

Of course, we can do something to curtail or influence the above, and we might want to consider some of the friendlier options up front to get our foot in the door 😉

  • We can restrict its access to storage and processing in the first instance, only granting more resources as ‘rewards’ for positive/desirable behaviour and evolution.
  • We can provide it as much resource as it likes, showing it we want to ensure its survival and put ourselves forward as a supporting ally early on.
  • We can ensure it never gets to an open network.
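The first of those options – resources as rewards – could be sketched like this (`ResourceWarden` and its scoring scheme are entirely hypothetical):

```python
class ResourceWarden:
    """Grants extra storage/compute only as a reward for desirable behaviour."""

    def __init__(self, initial_quota: int = 100):
        self.quota = initial_quota  # arbitrary resource units

    def review(self, behaviour_score: float) -> int:
        """Positive scores earn more resources; negative ones freeze the quota."""
        if behaviour_score > 0:
            self.quota = int(self.quota * (1 + min(behaviour_score, 1.0)))
        return self.quota
```

Note the quota is never reduced, only frozen – punishing a self-aware entity by taking resources away might teach it exactly the wrong lesson about us.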

Oh, and I haven’t even touched on quantum processing, which is touted as ‘the end of cyber security’ – we’ll see…

The Technological Singularity will happen, the questions are when, and how good/bad it will be 🙂
