The rotating blade of meaning (4)


Everything is in motion… Arthur M. Young and Isaac Newton both knew that, but in different ages and different ways. Let’s take a slight detour into some basic ways of looking at one of our fundamentals – the way things move. Our search for Arthur M. Young’s ‘geometry of meaning’ will be enhanced if we can enrich our vocabulary…

Someone in the age of Newton would have said, “This chair upon which I sit is plainly still.”

We can be cleverer than that, now. We all know that our planet is rotating once per day. We may remember that the Earth orbits around its sun once per year. We can even know that the atoms from which the chair is made are themselves in constant motion, albeit within a quantum envelope which renders them solid only when they are observed. The chair is therefore in constant motion, but most of that motion is irrelevant to the scale of human life. The rotation of the Earth is not likely to upset the stability of the chair, but it would be theoretically possible to create a hyper-sensitive chair that was…

Newton did not know of atoms, though the ancient Greeks discussed their necessity. But he knew that there had to be a limit to how many times you could divide something. At that limit you would find the essence of matter. He was very adept at envisioning the practical consequences of pursuing things to their limit…

He knew that things moved differently; not just in how one thing could overtake another, but that – within how they moved – there were differences of what we now call ‘rates’. To grasp this, we need to revisit the idea of a rate. If I have a dripping tap, and it results in one gallon of wasted water, measured over an hour, then I have a loss of one gallon of water per hour. That is a rate: one relevant number divided by another – something per something else. It is a measure of how something that changes (something dynamic) behaves with respect to something else. But our dripping tap may not waste water in a uniform way. Within that hour there may be peaks and troughs in leakage due to factors hidden within our ‘averaged’ one-hour period. This is important to hold in mind when thinking about ‘motion’, too.
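The dripping-tap idea can be sketched in a few lines of Python. The per-interval figures below are entirely hypothetical; the point is only that one overall ‘average’ rate can hide very different rates within the hour.

```python
# Water wasted in each 10-minute interval, in gallons (hypothetical figures).
intervals = [0.05, 0.30, 0.10, 0.25, 0.05, 0.25]

total_gallons = sum(intervals)       # one gallon over the whole hour
average_rate = total_gallons / 1.0   # gallons per hour, the 'averaged' rate

# The rate within each 10-minute interval, re-expressed in gallons per hour:
interval_rates = [v / (10 / 60) for v in intervals]

print(f"Average rate: {average_rate:.2f} gallons/hour")
print("Rates within the hour:", interval_rates)
```

The average is one gallon per hour, yet the momentary rate swings between 0.3 and 1.8 gallons per hour – the ‘peaks and troughs’ the average conceals.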

In Newton’s time, it was known that the ‘motion’ of things had different aspects. Imagine Isaac Newton as a child playing a game whereby he used a fallen branch of a tree, suitably trimmed with his penknife, to strike stones in his garden to see how far they would fly. He would notice that such stones went from being stationary (at rest) to suddenly going as fast as they might (a maximum) before travelling through the air in an arc and falling to earth again. The motion of the stone would therefore vary from nothing (taking out the Earth’s motion) to maximum speed – as it climbed into the air; to a point where what we now call gravity caused its upward motion to cease and its downward motion to increase, even though it was still moving away in terms of distance from the child Newton in the garden. Thereafter, the grass and earth would tangle its motion and it would come to rest again.

If we measure the whole of this motion, we might simply conclude that the stone was whacked by the strong child wielding a stick and shot down the garden for a length (distance) of, say, 10 metres. If a modern time instrument had been available, we might also discover that it took five seconds to come to rest. This would be accurate as an ‘average’ of what had happened, but would tell us little of the stages of the lifecycle of that overall motion – the interesting bits!

The above motion of the stone (with the help of a modern timer) would yield a measure called the speed or velocity of the stone as: 10/5 = 2 metres per second: distance divided by time. But that’s not what happened, except seen as a historical summary. What really happened is that when child Newton whacked the stone, it didn’t just have a constant speed; its speed changed from nothing to its maximum value, sufficient to propel it (with the correct angle of strike) into the air in its graceful, if short, arc. Thereafter it slowed and sank through the air while still travelling along the line of its trajectory – the direction in which it was whacked. After this, it landed, bounced and came to rest in a scruffy (but real) way in the tangle of grass and mud.

Aside from my borrowing of his childhood, the real Newton had the genius to realise that the first part of the motion (from rest to its maximum) was not just speed, but an increase of speed that had a different rate. This was caused by the whacking of the stout stick, which transferred its energy to the stone, slowing the stick and thrusting the stone into space. This change of speed or velocity was named acceleration, and it was seen by Newton as something different from velocity itself. This was a breakthrough in thought and measurement, and marked Newton as a true genius. It would take hundreds of years for Newton’s discoveries to filter into the mindset of the age. Many people today have little idea what he achieved, and yet our age of powered motion is built on his discoveries and the accompanying mathematics of calculus. The “Newtonian” world is the world of classical physics, and this view of how the world operated persisted until the advent of Quantum Theory in the early years of the last century.

Returning to Arthur Young’s discoveries: Young examined the symmetry of what Newton had discovered in the following way:

Motion begins with distance from a start-point. In our example above the stone travelled ten metres. This is simply a length, which we can call ‘L’. A length ‘L’ applied to a start point (or Origin), without consideration of its motion, simply gives us a new position.

If we want to go further and investigate the real motion of our stone, we consider the time it took to travel the distance. We can call this ‘T’. The length (L) per time (T), written L/T (length divided by time) gives us a rate called speed or velocity – example miles per hour. This ratio of L/T is a basis for all motion and reduces things to their simplest expression.

So, what about acceleration? Remember that this is an increase of velocity, not distance. If my car accelerates, it is now travelling at, say, sixty miles per hour rather than fifty. If that increase took an hour, the acceleration has been ten miles per hour, per hour – in other words, the rate of change of the velocity.

Summarising this:

Position = L

Velocity (speed) is the rate of change of position or distance = L/T

Acceleration is the rate of change of velocity, which is L divided by (T times T). This new expression, T times T, is written T squared: T with a little ‘2’ to the right of it, like this: T²
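The summary above can be sketched numerically. In this minimal Python sketch (hypothetical position figures, assuming evenly spaced one-second samples), each successive rate is simply ‘differences divided by the time step’:

```python
# Positions (metres) of an object sampled once per second - hypothetical
# figures, chosen so that the object accelerates uniformly.
positions = [0, 1, 4, 9, 16, 25]   # L
dt = 1.0                           # T between samples, in seconds

def rate_of_change(values, dt):
    """One numerical 'rate': successive differences divided by the time step."""
    return [(b - a) / dt for a, b in zip(values, values[1:])]

velocity = rate_of_change(positions, dt)       # L/T
acceleration = rate_of_change(velocity, dt)    # L/T squared

print(velocity)       # [1.0, 3.0, 5.0, 7.0, 9.0]
print(acceleration)   # [2.0, 2.0, 2.0, 2.0] - constant: uniform acceleration
```

Each application of `rate_of_change` divides by another factor of T, which is exactly why acceleration carries the units L/T².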

Arthur Young was pursuing the fit of the science of motion to the Fourfold model of meaning we discussed in the first three of these blogs. He needed a fourth term to follow the sequence:

Length (L),

Rate of change of Length, (L/T or velocity)

Rate of change of rate of change of Length, (L/T² or acceleration)

The missing term (L/T³) would be the next in the series and would complete the integration of the human world of motion with Young’s fourfold map of universal meaning…
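The series can be followed one step further numerically. This is a hedged sketch with hypothetical sample figures, chosen so that the fourth term comes out constant; each application of the difference operation divides by one more factor of T:

```python
def rate_of_change(values, dt):
    """Successive differences divided by the time step."""
    return [(b - a) / dt for a, b in zip(values, values[1:])]

dt = 1.0
positions = [0, 1, 4, 10, 20, 35]   # hypothetical position samples (metres)

velocity = rate_of_change(positions, dt)        # L/T
acceleration = rate_of_change(velocity, dt)     # L/T squared
fourth_term = rate_of_change(acceleration, dt)  # L/T cubed - the missing term

print(fourth_term)   # [1.0, 1.0, 1.0] - a steady rate of change of acceleration
```

Numerically, then, the fourth term is perfectly well defined; what Young noticed was missing from the physics of his day was a recognised name and role for it.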

But there was no recognition of a fourth term (L/T³) of Length and Time in physics… Yet Arthur M. Young, creator of the modern helicopter, knew there was a commonly understood concept that matched this – he had used it to make his helicopters safe…

To be continued…

{Note to the reader: These posts are not about maths or physics; they are about a unique perspective on universal meaning created by Arthur M. Young. If you can grasp the concepts in this blog, your understanding of what follows will be deeper.}

Previous posts in this series:

Part One, Part Two, Part Three

©️Stephen Tanham

Stephen Tanham is a director of the Silent Eye School of Consciousness, a not-for-profit organisation that helps people find a personal path to a deeper place within their internal and external lives.

The Silent Eye provides home-based, practical courses which are low-cost and personally supervised. The course materials and corresponding supervision are provided month by month without further commitment.

Steve’s personal blog, Sun in Gemini, is at

You’ll find friends, poetry, literature and photography there…and some great guest posts on related topics.


The Time Vampires

VampireAA - 1

It’s a tough one, this. I love technology and I have a lasting belief that it has brought us a lot of good… but a nasty feeling that we are touching some of its ‘dark edges’; brought on not by the technology itself, but by the motive for profit and dominance inherent in the power that a few mega-companies wield.

Such companies are ‘enablers’. The real threat is the big money that has seen the potential for manipulation – global manipulation.

It was a 19th century historian and Cambridge professor, John Dalberg-Acton, 8th Baronet, who said, “Power tends to corrupt, and absolute power corrupts absolutely.”

It’s a quote many of us know, but what he went on to say in the same speech is less well known; “Great men are almost always bad men…”

We all like to believe in ‘great men’ (and women). Many of the tech giants have risen, like David against Goliath, to overturn traditional market leaders and introduce vast innovation that benefits us all. When I consider what I can do, what I can reach, what I can understand, courtesy of the Internet and its access to largely-free resources, I stand in awe of what the past few decades have brought. My writing, and my supplemental efforts as an illustrator, have all been the result of what would now be termed Tech tools.

So why the opening sentiments?

They were prompted by a quote from one of the product directors of a major Tech company. He was quoted as saying that a new breakthrough in that company’s products would help ‘use up the mind cycles’ of the young people who formed the largest proportion of its customer base. Young people are increasingly targeted by Tech companies, such as those behind social media sites. The young see it as a natural extension to their ‘talkative’ world – a sign of belonging, a ‘cool’ skill.

It’s a very powerful ‘pull’. It also makes Tech billions…

The young and the naive fill in mock ‘surveys’: What type of doggie walker are you? With the results, Tech can sell deeply effective profiles of each person, so accurate that exact product targeting can be placed in front of them, in their favourite colours, linked to their favourite games or cartoons or literature heroes…. or other products, of course – ‘your best friends are showing how grown-up they are by eating frizzzle-joys in luscious purple….’

And then there are drugs… Drugs are what you can’t live without; habits of ‘feel good’ that, especially in the impulsive and immature young, take hold very quickly. Like gambling, or children’s computer games that require them to pay for the key to a ‘level’ that ‘all their mates’ have already achieved… “Daddy!”

Drugs don’t need to be chemicals. The body and mind can make their own.

Online gambling has grown totally out of control. Some very big names are buying up the stragglers because the profits are so vast – as are the wrecked families and the huge debts that lead to crime to ‘repay’ them. Social networks I can understand; gambling has always been designed to exploit those who can’t comprehend the inevitability of their suffering. And children are being targeted: ‘just click here to say you’re over 18.’

The word ‘evil’ isn’t used much any more. It should be…

From a spiritual point of view, Tech can be seen as angel or devil. It has turned the ‘globe’ into a village. But the downside is that every atom of that village is now a target for money – big money. But that’s judging it from my perspective – someone who can see the massive abuse that is taking place.

Why aren’t we doing something about the downside? Because the problem is global and we’re not. Big money in Tech doesn’t want its opposition to be global, because that would enable effective control of its excesses. Britain votes Brexit and is leaving the only institution that is really trying to clean up this mess. America lurches to the right and its president wants massively less regulation and a weaker UN. In both cases the Tech social media machines were a dominant part of the Tech used to manipulate the elections – in fact, the same Tech companies were involved on both sides of the Atlantic.

On a very simple level, I don’t want my children (grandchildren, really – my children are in their thirties) to have their ‘cycles’ stolen. I want them to have some time to think, to dream, to read and enjoy fantasy. I want them to walk through the woods and climb the hills… and create in their growing minds. I want all that to lead to an eventual awareness of the living magic in the now, to a series of questions about themselves that will begin their real search for meaning in their lives.

So, next time I read of a rich, Tech product director who wants to interfere with the core of my grandchildren’s life, I’m going to get angry…

… Oh, yes, I just did.


©️Stephen Tanham.


Owning the Robots? – the rise of AI


It’s 1950. The room we are in is more like an old school hall than the site of a leading-edge experiment in the definition of intelligence. Before us is a teletype – a machine that we can type on, which will relay that message – and any response generated – across a telecommunications link to a similar machine somewhere we can’t see.

That’s a fairly simple requirement by today’s standards, but, back then, it was the way you communicated, remotely.

We’ve just typed a question on our end of the teletype link: “What will you do on Tuesday?”

There is a pause while our recipient in this experiment thinks about this. Then our machine chugs into action, the paper moving up one line to allow the response to appear. A bit like an old teleprinter giving a football score, the characters appear, noisily, one by one.

“Perhaps, I should ask you what you will be doing on Tuesday?” reads the message.

We smile. There is both humour and cheek in the response. For the past hour we have been trying to decide whether our correspondent on the other side of the link is a machine or not. This test for intelligence – whether we can tell the difference by talking to him/her/it – has been proposed by Alan Turing, one of Britain’s leading code-breakers of World War II, and the genius credited with cracking the Nazis’ ‘uncrackable’ Enigma encryption machine, as part of the Bletchley Park initiative.

Four years from now, Turing will be found dead from cyanide poisoning in his Manchester flat, though the apple remnants in his stomach will show no signs of the poison. Shortly before this he had been required by the courts to undergo chemical castration, following prosecution for homosexuality. The treatment gave him breasts, and is thought to have triggered the deep depression that preceded his death. His flat was full of advanced computing circuitry, which was removed by the local police.

Our little test with the teleprinter never actually took place. Like many such mental devices that Turing put forward, it was conceived as a reliable ‘thought experiment’ in logic. His proposition was:

“A computer would deserve to be called intelligent if it could deceive a human into believing that it was human.”

The success of Turing’s code breaking also led to significant moral dilemmas. It was alleged that Churchill’s wartime government faced agonised decisions when messages they intercepted resulted in knowledge of future enemy attacks. If you knew that a certain convoy of ships was being targeted, did you alert them to the danger? We would probably respond with ‘of course!’, but to do that, repeatedly, would have revealed to the enemy the fact that the codes had been broken – resulting in the critical, and potentially war-winning, advantage being lost.

What followed has never been officially verified, but has been widely spoken of. The government – and Turing was involved in this himself – had to derive a method, an algorithm, that would randomise the interventions to an acceptable level of success. Of course, each one also had to be viewed in terms of strategic importance to the war effort and the country’s future. ‘Playing God’ was reputedly one of the expressions that Turing used at that time. It was during this era that the foundations of what became known as Game Theory were laid by mathematicians on both sides of the Atlantic. Not unlike the thinking behind cybernetics (one of the early branches of ‘robotic’ artificially intelligent behaviour), this new branch of confrontation and strategic warfare was to gain in importance until it became a key part of the planning of possible nuclear war scenarios.

Turing was concerned with what ‘behaved like it was intelligent’, understanding that, although the teletype didn’t look like a human, or even a robot, if its appearance could be masked, that wouldn’t matter. The problems generated by the cracking of the Enigma machine came about because of the success of the technology. They were not directly related to the technology, itself.

Artificial Intelligence (AI) has moved on, considerably, since the 1950s, and now poses moral dilemmas and consequences much greater than those of revealing that codes have been cracked. AI, embedded in all manner of domestic products, now makes decisions for us, computing in tiny fractions of a second what the line of least risk might be for a given situation; or what the most efficient ‘way forward’ might be…

The problems generated by the success of the technology haven’t gone away. A recent example is the logical programming of automated ‘driverless’ cars, which use AI to control themselves – something quite unheard of in Turing’s time, and viewed as fantasy even a few years ago.

Dilemmas often centre around a ‘thought experiment’, much like Turing’s test of intelligence all that time ago. One of the latest such mental devices, derived from philosophy, is known as the ‘Trolley Problem’. Imagine that you are standing beside a railway track, and that, coincidentally, there is a lever nearby that activates a set of points to change the direction of any train coming your way. You’re having a bad day, and it gets worse when you turn around and see that a number of people have been tied down to the main track – the normal route of the train. Before you can adjust to the shock, you turn to look at the other track (the one that the points will activate) and see that there is another person tied there. Your attention is rudely dragged back to the main track – and you see that a runaway train is coming down the line…

You – simulating AI – have the only ‘control freedom’ possible; the train can’t be stopped and is either going to kill the group or the single person. You get to decide. Most people would switch the track-points so that fewer people got killed. But then, just before the train gets to you, your smart phone tells you that the men on track one are bad guys who’ve just robbed a bank. Now, what do you do?

This is not so far-fetched as it may first appear. Say this was a driverless car, and a connected social media platform that you trusted kept a ‘good persons score’ on each satellite-tagged person in the vicinity. In the event of your car being about to smash into the car in front of yours – the one with five occupants – your car would ‘know’ that the lady with a pram on the pavement, whom you would kill if your car took its usual ‘avoid collision’ action, was a much better person, and therefore worthy of saving.

It’s horrific… but this ‘world’ is very close to where we are now as a technological society. In this example what’s happening is that the possibility to make changes in real-time is forcing us to re-examine that tiny window previously covered by the word ‘accident’.

If AI can change the outcome, then who owns the programming the AI will use?
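The question becomes concrete the moment the choice is expressed as code. In this toy sketch (every name, score and scoring function is hypothetical), the ‘moral weighting’ is just a parameter – and somebody has to own it:

```python
# A toy model of the trolley/driverless-car choice. Hypothetical throughout.

def choose_track(main_track_people, side_track_people,
                 goodness_score=lambda person: 1.0):
    """Return the track whose loss the programmed rule counts as smaller."""
    main_cost = sum(goodness_score(p) for p in main_track_people)
    side_cost = sum(goodness_score(p) for p in side_track_people)
    return "side" if side_cost < main_cost else "main"

# Counting every life equally, the rule diverts the train to save the group:
print(choose_track(["a", "b", "c"], ["d"]))                 # side

# Feed the same rule a 'good persons score' and its decision can reverse:
scores = {"a": 0.1, "b": 0.1, "c": 0.1, "d": 2.0}
print(choose_track(["a", "b", "c"], ["d"], scores.get))     # main
```

Nothing in the code is exotic; the ethics live entirely in `goodness_score` – which is precisely why the question of who writes it matters.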

While technology is advancing, massively, politics and humanitarianism seem to be being left far behind, leaving the rich and powerful to decide things – as they did with the large-scale manipulation of political data on both sides of the Atlantic during the past twelve months.

Next week we will go deeper into the fascinating but potentially nightmarish world of AI.

©Copyright Stephen Tanham 2017









Wire Strippers – Part Three


This series of posts has been about the manipulation of democracy, on both sides of the Atlantic, by a new generation of internet-based personal data collection tools. These tools feed on the information we – wittingly or unwittingly – provide to social media applications like Facebook and Google. We have seen how a small number of our opinions, expressed on such social media, may provide a comprehensive inference of our voting intentions, leaving us vulnerable to instantly presented information – or disinformation, in the case of the past twelve months.

Equipped with products like NationBuilder, a team of people carrying out political canvassing on the doorstep will already know what your voting intentions are likely to be. Their message to you may appear to be a traditional reinforcement of a party political manifesto, but it could equally be an entirely fictitious reference to a script that has been created by organisations who use that tool to cast doubt in your mind. The tool isn’t doing this, but it is enabling those who build the message – sometimes hundreds of thousands of different messages – to falsify or subtly bend the truth, as in my experience, related in Part One of these posts.

There is no law against lying to the public, as long as you do it under the cover of a political process. That is the sad conclusion drawn by experts across the world, following the turbulent events of 2016. The UK has data protection laws, as, increasingly, does the EU. But neither jurisdiction applies the law to the truth of what is given out as fact in elections. In the US, there is no single public data protection statute, though different states have their own, many of which conflict with each other. (source)

We need to consider the trade we make when we use social media. We do not pay for the use of a very sophisticated suite of applications, at least not up-front. Instead, we give the platform providers the right to use our data. Few of us ever read the small print. I understand that trade-off with advertising, having been in the software business for many years. I’m perfectly happy for Facebook or Google to show me images of what they think I might want to buy. Sometimes, I do click on them to buy something, so, it works, and allows the social media platform to make its money – but not as much money as they can make in an election year…

More sophisticated use is frequently made of our data if we respond to one of the ‘you won’t believe what this resulted in…’ type of sensationalised banner, in similar style to a tabloid newspaper. You may then be led, unknowingly, to ‘install’ a product such as NationBuilder, which will live alongside your social media tool, collecting your political data. If you haven’t checked, click on your settings and see whether this is hitching a ride on your life… Remember, these systems run on the internet, so they don’t need to download to your computer; they just run alongside what you already have in some distant server farm. Again, there is nothing illegal about such software, but the use to which that data is put – the motives of the end user who wishes to engineer public opinion in a targeted sector of the population – is increasingly seen as immoral and hostile to an open and truthful political process.

The fault here does not lie with the technology. The IT business has been offering ‘data-mining’ for a generation. What is new is the willingness to shift the process of opinion-farming into dubious practice that should lie outside the law, but doesn’t.

I have watched a fascinating process for many years as new ‘elites’ are generated in society. During the 1970s, and for a period of thirty years, it was IT itself that attracted a generation of bright minds. The latest has been the growth in the power of international banking, mirroring that of the internet. In each case, a small group at the top of the pile viewed themselves as above the law. In each case, new laws were required to protect the public – the ‘little people’, to quote a popular phrase among the ultra-rich investment banking community.

It may be too late… The ability to understand the complexity of modern commercial, political and governmental processes, all of which rely on embedded computing and the internet, may be beyond all but a few. Most of these few will have been seduced by the lucrative returns they can make in commerce. I am a proponent of business, having run my own company for two decades. I consider business to have been the engine of social progress – but only as long as it serves the common good.

What we lack are laws and funding that extend the rule of law into all such dubious aspects of society, so that electoral manipulation is as much subject to scrutiny as banking… though the latter has yet to mature, as well.

There are many voices raised in support of this. The latest is the inventor of the World Wide Web himself – Tim Berners-Lee. In a recent article in the Guardian newspaper, he makes many of the points raised in this series of three posts, but with much more authority than I can. His comments on governmental mass surveillance make sober reading as well, especially in a week that has seen the Wikileaks revelations about ‘smart’ domestic devices being able to listen in on our home comforts… Anyone remember the chapter in George Orwell’s 1984 where the speaker on the wall tells Winston and his love that they’d better get dressed?

Why aren’t we chilled at all this? Have we become numb to it all? Is the fear of terrorism really enough justification to change all our lives in ways that may never be reversible? Is it simply ‘bread and circuses’ – keeping the masses happy with sport and shopping while those running our lives do the ‘hard stuff’? Or, have we simply become soft? In a period of relative prosperity have we forgotten the need to struggle for the truth? History, for those few who still study it, paints a vivid portrait of what happens, next…

Philosophically, we should question those who seek to control for a living, in all its forms. Life thrives on freedom, but freedom needs laws to control its excesses. Never before have we so needed the spirit and letter of the law to be strengthened to protect human rights. Never before have we so needed that new generation of lawyers to come forward with a will to protect society and its values beyond their personal pocket.

When you see newspaper headlines accusing the country’s most senior judges of being ‘the enemy of the people’ then it’s time to worry most of all… The Law may be occasionally pompous, but it’s the best friend we’ve got.

Original content©Copyright Stephen Tanham 2017

Other parts of this series:

Part One, Part Two




Wire Strippers – Part Two


Fascism arises as a result of certain conditions. These are not limited by being ‘a long time ago’. Suggest this to anyone happy with the current political environment of Britain and the USA and they will look at you as though you are an academic who is out of touch with both popular opinion, and the ‘revolution’ of the underdog that has created the new order.

Fascism can, and needs to, be simply defined, so that our children and grandchildren, who are not close enough to the societal memories of the Second World War, can be equipped to spot its festering signature.

Fascism begins with populism. We are reminded by a usually charismatic would-be leader that the system is corrupt and that we – our undervalued group in society – are being deprived of our rights by an alien minority. There has to be a minority that can bear the brunt of our group’s revenge.

Fascism believes that things can be black and white; and that any wisdom to the contrary is part of the mind-fog generated by the intellectual experts who have created the ethos of the governing elite – who themselves are, of course, corrupt, liberal or not.

Once elected, fascism uses its initial popularity to shut down or obfuscate the mechanisms of opposition using one or more forms of brutality. Increasingly, this brutality is of the mind. As long as it can continue repeating its ‘you’ve been robbed’ message, it knows it will get enough time to silence the truth. Truth and facts are always the first target of this form of mental derangement made politics.

Appearing to strengthen the society in which it grows, such twisting of the real begins to weaken the country it infests, leaving it vulnerable to international forces who are less concerned with violence of the mind and more free to use outright military force. Defending the country from real war is anything but simplistic.

Mussolini, the leader of Italy during the second world war, was a classic fascist. He rose to prominence and control of Italy using all the above techniques. His thugs silenced opposition. He was a charismatic man with an intense appetite for both food and sex, and is reported to have ‘needed’ several women a day. Recognising that, by itself, Italy could not be strong enough, militarily, he sided with Nazi Germany, which eventually ate him.

At the end of the war, he fled and was arrested, in Sicily, when a mob presented him, freshly beaten up, to a mid-ranking British officer who was part of the Allies’ liberation force and had wandered into a town square. That man, ironically enough, was Alan Whicker, who went on to become an internationally-respected television journalist, gaining his own renown by interviewing several of the world’s most powerful dictators – at a time when to do so would normally have resulted in your death.

In the first post in this series, I described how a Facebook page was used as a mechanism to present me with a deliberately misleading advert that was targeted at a certain group of voters, just prior to the UK’s Referendum in June 2016. Further research, and the publishing by several UK newspapers of the larger process of which my incident was a tiny part, showed that there had actually been hundreds of different ‘messages’ used by the campaign – a key part of the successful strategy to take Britain out of the EU, backed by the right wing of the Conservative Party and the extreme-right UKIP. Each of those millions of adverts had been targeted specifically at people ‘tagged’ as potential swing voters – the target group where the power lies in any democratic election.

The identification of this group, including (incorrectly) me, was derived by a highly sophisticated system of data analytics created by a UK company, Cambridge Analytica (CA), in which American hedge-fund billionaire Robert Mercer now has a major interest. During the recent US elections, Mercer and his data-mining technology switched from Ted Cruz’s campaign to back Donald Trump’s when the former began to falter. They were said to have a significant effect on the election’s result.

There is nothing illegal about data mining, nor, currently are there limitations on the use of such data during elections, though that is now being challenged. Wikipedia’s entry on the US arm of Cambridge Analytica includes its referenced competence in military disinformation campaigns, social branding and voter targeting.

Again, there is nothing illegal about these operations, but they don’t work in isolation.

To carry out such penetrative campaigns you need data to feed the data-mining algorithms. Recent developments, seemingly originating in work carried out at Cambridge University, revolve around how to infer character and voting intentions from social media data; they are summed up in Wikipedia’s entry on this topic, which states that Cambridge Analytica is using psychological data derived from millions of Facebook entries. Elsewhere, the same source says that research at Cambridge has shown how easy it is to infer voting intentions from a relatively small number of posts.

The history of these events, and the technology behind them, are summarised by the Guardian in this article.

Those are our Facebook posts, shared with our friends. Are our simple communications being used, without our permission, for such data mining? It may take governmental intervention to uncover the scale of this, but, thankfully, this is now being initiated, at least in the UK.

We may also need some geniuses on the side of the people, as well.

This is technology that is changing how we are governed and what agenda is used to appeal to the ‘underdogs’ in a society, sweeping aside shared values that have maintained peace and relative harmony since we emerged from the smouldering ruins of the major wars of the last century.

Prompted by reports of this in the Guardian (one of the newspapers banned from the recent White House briefing, along with the BBC, CNN and several US newspapers), there is now an official investigation by the UK’s privacy watchdog. But it has come too late to alter the populist course on which we now seem to be set.

Facebook has recently said it will review the use of such data and that it is opposed to political manipulation using its information as a basis. But, be careful next time you’re tempted to respond to ‘What kind of archangel are you?’ on your favourite social media… you never know who has paid to gather that data and is processing it to find out much more about you than you thought you were revealing…

Next week, in the final part of this series, I will step back and examine the underlying forces, political and psychological, that have caught us, unprepared, for the use of this data on a massive scale.

Continued in Part Three

Original content©Copyright Stephen Tanham 2017




Wire Strippers – Part One


I remember the moment when I began to suspect that a new generation of ‘wire strippers’ were at large in our internet wiring.

It was June 2016 – the month of the UK’s Referendum on remaining in the European Union (EU). I was reading through postings by a good cross-section of my friends on social media.

The opinion polls were showing that the vote was projected to be neck and neck. There was great tension in the country, with internal divisions that were splitting both families and businesses down the middle.

It was as ugly a political period as any I remember and it made me realise how much was at stake – and how many large organisations were part of the persuasion process…

One day, a large part of my Facebook screen was replaced with an advert that appeared to be on behalf of the Labour party – only the message was wrong. I knew that, despite the lacklustre performance of that party’s leader on the subject of Europe, the official party line was to be part of the Remain camp. However, the advert, which was devious in the extreme, quoted the party leader’s old statements on the subject, from his pre-leadership anti-European days. The whole advert was dressed as an official Labour plug. Unless you definitely knew otherwise, it told you, convincingly, that the Labour position on Europe was negative…

I asked around, but no-one else had seen it, which I found strange. As a pro-European I knew the effect of that ad would be significant, especially as I suspected that a mobilised Labour vote, plus the majority of the Tory party, would comfortably swing a majority behind Remain. I felt considerable anger that someone was twisting the truth in a very sophisticated way.

And then it occurred to me that the reason no-one else in my small sample had seen it was that I had been part of a group targeted for a very specific, negative message.

Could that be? If that were true then the data analysis used by such social media systems had radically changed in a short time.

Social media is a mixed blessing. I find it a very powerful means of communication, and we tend to forget that we do not pay for it… well, not in money up front, anyway. It’s not an authoring package – WordPress is much stronger in that respect – but it is a good ‘short message’ mechanism. As a WordPress blogger, I quickly learned that posting a link to a WordPress piece on Facebook produced little interest – the recipients expected ‘home grown’ stuff. Fair enough, I can understand. There is a certain homogeneity to what works on each platform.

The Referendum proceeded, culminating in two weeks during which the most deceitful and cynical promises were made to a pre-targeted section of the British population; so successfully that even those making them were left visibly surprised (not to mention policy-less) by the result.

At that point, I had no idea of the concert of targeted data manipulation that had been played out to the UK electorate, nor of the international reach of those involved.

Continued in Part Two

©Copyright Stephen Tanham 2017