
The Darkness of Big Data - II



The promises of big data are convincing. Organizations across every major industry are using data-mining techniques and databases as competitive differentiators to:

  • Detect fraud and cyber-security issues
  • Manage and eliminate risk
  • Anticipate resource demands
  • Increase response rates for marketing campaigns
  • Discern voter preferences for election campaigns
  • Solve today’s toughest big data challenges


These all sound wonderful, but the concerned citizen needs to explore applications that have a downside and must be addressed very soon. We are not talking about nefarious designs but rather a creeping invasion into human life.

On mental health and beyond. Through big data, we will soon witness the development of personal counselor programs. Such programs have already existed for half a century. For a first example, consider ELIZA, a computer program that operated by processing users' responses to scripts, the most famous of which was DOCTOR. It was written around 1965 by Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory. It used little information about human thought or emotion, yet was startlingly interactive. People were convinced ELIZA was a real person.   https://en.wikipedia.org/wiki/ELIZA
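The mechanism behind ELIZA is simpler than its effect suggests. A minimal sketch of the idea, with patterns and canned responses invented here for illustration (the real DOCTOR script was far more elaborate):

```python
import re
import random

# Toy ELIZA-style responder: match a keyword pattern in the user's
# input and reflect fragments of it back through canned templates.
# All patterns and replies below are invented for illustration.
RULES = [
    (r"\bI feel (.+)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"\bmy (\w+)", ["Tell me more about your {0}.", "Why does your {0} concern you?"]),
    (r"\byes\b", ["You seem certain.", "I see."]),
]
DEFAULT = ["Please go on.", "Can you elaborate on that?"]

def respond(text):
    for pattern, templates in RULES:
        match = re.search(pattern, text, re.IGNORECASE)
        if match:
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULT)

# one of: "Why do you feel anxious about work?" /
#         "How long have you felt anxious about work?"
print(respond("I feel anxious about work"))
```

No understanding anywhere, just search-and-substitute; yet the reflected phrasing is enough to feel like attention.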

If such code is available online, free of charge, users will reveal their deepest secrets in exchange for comfort, conciliation, or a little understanding. Such code will become like a close friend, in many cases the only friend. This information will be extensively mined and then used in ways we can only guess. The process will accelerate when psychologists and the like are required to put their notes online, ostensibly for insurance purposes. It will be promised, at first, to be highly confidential. That sincere promise will not hold.

It is not cynical to say, these days, that every single thing can be hacked. Already, tax, medical, and credit information is hacked on a daily basis. We used to believe that they (whoever "they" are) would figure out a way to protect us. So, like the ants in the forest and the wildebeests on the Serengeti plain, we hope for safety in numbers, that our data will not be selected for invasion.

The code to "read" and interpret writing is already well developed. It is not that the code actually understands what you write, but it knows how to respond. It's like when a friend relates something you don't get: in most cases, you only know (i.e., have learned) what to say. Such code will greet you when you log in and ask about your day, your family, your dog, and other items it has learned you like. You will almost be convinced it cares about you. Yet this is just what your shrink does now; he or she learns to begin the session with things you care about.

You say: yes, but. The computer is too impersonal and too cold. It can't convince. You know it is merely a machine. Remember ELIZA? To repair this, we now have available code that will show you a face or avatar with lip-synced animation and voice, in real time no less. These are well developed. It will learn to talk to you in the way you prefer. Can you imagine how such code could be used to manipulate you, change your politics or buying preferences, or even radicalize you?

And now the beyond. Regular users of social media, such as Facebook, are already well analyzed, and that is only from passive data mining. There is more to come. For example, if I needed to find 20,000 people who seem to be Republican but have many Democratic tendencies (or vice versa), I could order this up from the big-data business managers, maybe at five bucks apiece, and then carefully engage them and convert them to another belief. There is big money here. Eventually, the machines will learn to do this themselves, all at a price per subject. The parties will be regular customers at this store. The media, bloggers, pundits, and writers will become secondary resources. For kids, even parents will assume a lesser role. To a great extent, this is social engineering gone wild.
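Mechanically, the selection step described above is little more than a query over scored profiles. A minimal sketch, with every field, score, and threshold invented here for illustration:

```python
# Hypothetical targeting query: given profiles that carry a party
# registration and an inferred "lean" score toward the other party
# (all fields and values invented for illustration), select the
# registered members of one party whose inferred lean crosses a
# threshold -- i.e., the "persuadable" 20,000.
voters = [
    {"id": 1, "party": "R", "lean_score": 0.72},  # inferred opposite-party lean, 0..1
    {"id": 2, "party": "R", "lean_score": 0.15},
    {"id": 3, "party": "D", "lean_score": 0.40},
    {"id": 4, "party": "R", "lean_score": 0.64},
]

def persuadable(records, party="R", threshold=0.6):
    # Registered with `party` but scoring above `threshold` toward the other.
    return [v["id"] for v in records
            if v["party"] == party and v["lean_score"] > threshold]

print(persuadable(voters))   # [1, 4]
```

The hard part is producing the lean scores in the first place; once they exist, ordering up a target list really is this cheap.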

It will not be all bad, but the potential for harm is what electrifies these comments. How this is achieved technically is postponed to another day, but the favorite techniques, such as random forests and neural nets, are remarkably simple to understand and apply.
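To show how simple, here is a toy sketch of the bagging idea at the heart of a random forest: many weak classifiers (here, one-split "decision stumps"), each fit on a random subsample of the data, voting on the final label. All data is synthetic and the setup is deliberately minimal.

```python
import random

random.seed(0)

def make_point():
    # Two features in [0, 1]; the label depends on feature 0 only.
    x = [random.random(), random.random()]
    return x, 1 if x[0] > 0.5 else 0

train = [make_point() for _ in range(200)]

def fit_stump(sample):
    # Find the single feature/threshold split that best fits the sample.
    best = (-1, 0, 0.5)                       # (correct count, feature, threshold)
    for f in (0, 1):
        for t in (i / 10 for i in range(1, 10)):
            correct = sum((x[f] > t) == (y == 1) for x, y in sample)
            if correct > best[0]:
                best = (correct, f, t)
    _, f, t = best
    return lambda x, f=f, t=t: 1 if x[f] > t else 0

# "Forest": 25 stumps, each trained on a random 50-point subsample.
forest = [fit_stump(random.sample(train, 50)) for _ in range(25)]

def predict(x):
    votes = sum(stump(x) for stump in forest)
    return 1 if votes * 2 > len(forest) else 0
```

A real random forest adds deeper trees and random feature subsets at each split, but the principle is exactly this: average many cheap, noisy opinions into one confident one.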

Such software will be infinitely patient and always available. It will be used to counsel prisoners, schoolchildren with problems, the terminally ill, agoraphobics, the love-lost, psychopaths, and the like. It may provide help for autism, a condition that has been resistant to almost every clinical approach. It will be used to test for compliance, attitude, loyalty, obedience, honesty, or any other desired quality. It will be used to decide on hires, paroles, and promotions. Not far away are commercials and promotions on your favorite shows targeted specifically to you; Netflix already does this.

The purpose, originally, will be promoted as benign. Eventually, the codes will be able to test for anything, and deception will be all but impossible. Even now, if you take a basic survey with enough questions, you will not be able to conceal anything targeted. Books will be written to help participants avoid giving untoward revelations.

Below is a simple illustration of a future hiring process. Almost everything is online, and all of the data is stored and then used not only with regard to the applicant but also for newer applicants downstream.



The online application → The pre-interview → The actual interview → You get the job → But then…

You are asked to sign releases for previously collected information, e.g.,
Facebook, Twitter, school records, and more

→ Pre-employment orientation → Orientation → Office-climate interview → Wellness interview → End-of-probationary-period evaluation. Now, do you keep the job?



----------------------------------------------------------------------------------------

All of this is underway, and our government hasn't a clue about it. Who knows what will happen if it does?
