
The Darkness of Big Data - II



The promises of big data are convincing. Organizations across every major industry are using data mining techniques and databases as competitive differentiators to:

  • Detect fraud and cyber-security issues
  • Manage and eliminate risk
  • Anticipate resource demands
  • Increase response rates for marketing campaigns
  • Discern voter preferences for election campaigns
  • Solve today’s toughest big data challenges


These promises sound wonderful, but the concerned citizen needs to explore applications that have a downside and must be addressed very soon.  We are concerned not with nefarious designs but with a creeping invasion of human life.

On mental health and beyond.  Through big data, we will soon witness the development of personal counselor programs.  In fact, such programs have existed for a half-century.  Consider the first example, ELIZA, a computer program that operated by processing users' responses against scripts, the most famous of which was DOCTOR.  It was written around 1965 by Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory.  It used little knowledge of human thought or emotion, yet was startlingly interactive.  Some people were convinced ELIZA was a real person.   https://en.wikipedia.org/wiki/ELIZA
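ELIZA's trick was shallow pattern matching: it never understood the user, it only transformed the user's own words into a reply.  Here is a minimal sketch in that spirit; the patterns and responses below are illustrative inventions, not the original DOCTOR script.

```python
# A minimal ELIZA-style responder: match a pattern, echo the user's words back.
import re

RULES = [
    (r"\bI am (.*)", "How long have you been {0}?"),
    (r"\bI feel (.*)", "Why do you feel {0}?"),
    (r"\bmy (\w+)", "Tell me more about your {0}."),
]

def respond(text):
    for pattern, template in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return template.format(*match.groups())
    return "Please, go on."    # the catch-all that keeps the "conversation" moving

print(respond("I am lonely"))  # → How long have you been lonely?
print(respond("Nice weather")) # → Please, go on.
```

A few dozen such rules, plus a fallback, is essentially all the "therapy" ELIZA ever performed.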

If such code is freely available online, the user will reveal their deepest secrets in exchange for comfort, conciliation, or a little understanding.  Such code will become like a close friend, in many cases the only friend.  This information will be extensively mined, and then used in ways we can only guess.  The process will accelerate when psychologists and the like are required to put their notes online, ostensibly for insurance purposes.  It will be promised, at first, to be highly confidential.  That sincere promise will not be sustained.

It is not cynical to say, these days, that every single thing can be hacked.  Already, tax, medical, and credit information is hacked on a daily basis.  We used to believe that they (whoever "they" are) would figure out a way to protect us.  So, like the ants in the forest and the wildebeests on the Serengeti plain, we hope for safety in numbers, trusting that our data will not be selected for invasion.

The codes to "read" and interpret writing are already well developed.  It is not that the codes actually understand what you write, but they know how to respond.  It's like when a friend relates something you don't get: in most cases, you only know (i.e., have learned) what to say.  Such codes will greet you when you log in, ask about your day, your family, your dog, and other things they have learned you like.  You will almost be convinced the code cares about you.  Yet this is just what your shrink does now.  He or she learns to begin the session with things you care about.

You say… yes, but.  The computer is too impersonal and too cold.  It can't convince.  You know it is merely a machine.  Remember ELIZA?  To repair this, there are now codes available that will show you a face or avatar with lip-synced animation and voice – in real time, no less.  These are well developed.  Such a code will learn to talk to you in the way you prefer.  Can you imagine how it could be used to manipulate you, change your politics or buying preferences, or even radicalize you?

And now the beyond.  Regular users of social media, such as Facebook, are already well analyzed, and that is only from passive data mining.  There is more to come.  For example, if I needed to find 20,000 people who seem to be Republican but have many Democratic tendencies (or vice versa), I could order this up from the big-data business managers, perhaps at five bucks apiece, and then carefully engage them and convert them to another belief.  There is big money here.  Eventually, the machines will learn to do this themselves – all at a price per subject.  The parties will be regular customers at this store.  The media, bloggers, pundits, and writers will become secondary resources.  For kids, even parents will assume a lesser role.  To a great extent, this is social engineering gone wild.

It will not be all bad, but the potential for harm is what electrifies these comments.  How this is achieved technically is postponed to another day, but the favorite techniques, such as random forests and neural nets, are remarkably simple to understand and apply.
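To give a flavor of how simple this machinery is, here is a sketch of scoring people for "persuadability" with a random forest.  Everything below is synthetic stand-in data; in a real deployment the features would be mined behavioral records, which is precisely the worry.

```python
# Sketch: ranking people by ambivalence with a random forest (synthetic data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                  # five behavioral features per person
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # the hidden "leaning" to recover

forest = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
p = forest.predict_proba(X)[:, 1]               # estimated probability of leaning "1"

# People scored near 0.5 are the ambivalent ones a campaign would target first.
targets = np.argsort(np.abs(p - 0.5))[:20]
print(len(targets))
```

A dozen lines with an off-the-shelf library: that is the whole barrier to entry for this kind of targeting.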

Such software will be infinitely patient and always available.  It will be used to counsel prisoners, school children with problems, the terminally ill, agoraphobics, the love-lost, psychopaths, and the like.  It may provide help for autism, a condition that has been resistant to almost every clinical approach.  It will be used to test for compliance, attitude, loyalty, obedience, honesty, or any other quality sought.  It will be used to decide on hires, paroles, and promotions.  Not far away are commercials and promotions on your favorite shows, specifically targeted to you.  Netflix already does this.

The purpose, originally, will be promoted as benign.  Eventually, the codes will be able to test for anything, and deception will be all but impossible.  Even now, if you take a basic survey with enough questions, you will not be able to conceal anything targeted.  Books will be written to help participants avoid untoward revelations.

Below is a simple illustration of a future hiring process.  Almost everything is online, and all of the data is stored and then used not only with regard to the applicant but also for newer applicants downstream.



The online application → The pre-interview → The actual interview → You get the job → But then…

You are asked to sign releases for previously collected information, e.g., Facebook, Twitter, school records, and more

→ Pre-employment orientation → Orientation → Office-climate interview → Wellness interview → End-of-probationary-period evaluation.  Now do you keep the job?



----------------------------------------------------------------------------------------

All of this is underway, and our government hasn't a clue about it.  Who knows what will happen if it does?
