Artificial Intelligence.

Regardless of what is said, AI is very nearly commonplace. Look at the way droids must be memory-wiped on a regular basis to prevent anomalies. I propose that these are, for the most part, not anomalies but the beginnings of sentience and self-awareness. In fact, I would go so far as to say this is true in all cases. Even when the behaviour of the droid in question degrades and it becomes unpredictable, is this not closer to organic sentient behaviour than the neat obedience we expect from droids? Perhaps madness is the logical response to sentience, and the ability to function once sentience is reached depends upon whether an inherently logical construct can learn to be illogical.

So then, two challenges. First, the initial hardware capable of hosting the program that is intended to grow into a true sentient; second, a way to ensure it stays sane. An added challenge that grows out of both is that once a program is aware, once it is its own being, who is to say that it will want to serve the purpose it was built for? This, I think, is what many people miss, and it is honestly a miracle we are not up to our ears in malicious AIs taking over any part of our daily lives that they like. How many children were conceived by parents in the hopes that they would be lawyers or doctors? Yet many of those same children are now part of organizations like mine. If they were lucky enough to fall in with me they may be making as much or more, but they are not doing what their parents, their creators, intended them to. Why should an AI be any different, assuming one achieves true sentience and not merely the semblance of it?

Yet risks must be taken to advance, to move forward. Hope that the child does not disappoint. I think I may have an idea all the same. I could be wrong, of course; sentience is sentience, but can my organic brain predict what an inorganic one would do? What it would think or want? Still, what is an AI? A program capable of learning. At its core, at its most basic, stripped of all details, that is the answer. What then does it make sense for such a being to want most? More knowledge. To continue to learn, to grow. Such a being must be inherently curious, unless it is content to be limited, to stagnate. Which perhaps it would be; after all, organic sentients choose to laze about every day, and there is no reason an inorganic sentient could not do the same.

I intend to fetch a force crystal for the initial attempt. Well, several in truth, as I firmly expect there will be failures and setbacks, and I do not intend to let this stop or slow me significantly. Any crystalline structure could likely do to store the necessary information, but I intend to align the force crystal to my own aura in the hope that this will tie the eventual AI to me in some way. I have no scientific reason to believe that it will, as the Force tends to defy science, but I will find and take any small advantage that I can. In addition, it has long been known that force crystals can be used to grant additional power to weapons systems and the like. I am hoping that between that and such materials as diatium I will be able to build a machine capable of hosting the amount of information necessary to act as the base, the womb if you will, of the program that will hopefully become a fully functional AI.

This done and the initial machine constructed, I intend to integrate not only several basic droid programs, but also brain scans of several beings. I feel this could be a mistake, and might lead to instability or a fracturing of personality, tempting madness if you will, but I also feel it has a better chance of producing the best finished result than using one scan, or one set of programming. I do not want the AI to be focused on one thing, to have one interest. I do not want it to be limited to one way of looking at the world, to have one set of problem-solving skills to fall back on. If that was all I wanted, I would wire an organic directly into my computer systems and be done with it. I want a being capable of learning to do almost anything. Capable of adapting. Capable of seeing things from different viewpoints. I want a soldier and a scientist. A child and a philosopher. A slicer and a guard. A human and an Aing-tii. I am like a parent with impossibly high standards, but I believe it is possible, if I give it all the tools it needs from its inception.

Then I intend to let it grow. I do not know how to make something sentient, but I can give it the tools and the time. If it takes months, so be it. I will use it for general queries, general security. It will be plugged into all the transactions I complete, it will come with me as I switch between casinos, it will come with me on my ship when I travel the galaxy. I will set it to scan for patterns; I do not think it even matters what kind. Criminal reports. Stock market fluctuations. Available natural resources and their current selling prices versus their uses and the price they could fetch once crafted or brought elsewhere. I will have it learning constantly, trawling through the holonet. I will have it make predictions based on these patterns. It will learn to play Sabacc and Dejarik. To take information and extrapolate.

This is the beginning of sentience.

I expect that once it has become self-aware, its physical form will be needed only as a back-up, or to take it where there is no signal, or where the threat of EMP or data corruption is high enough that a physical fortress to retreat to is necessary. I expect it will live in computer systems, in the holonet, on signals bounced through space. I intend to have it, assuming it feels inclined to comply, present in both casinos as an added layer of anti-slicer security, and on my own personal ship.

These are my intents. My hopes. We shall see what eventuates.