Robert Elliott Smith's Blog
August 28, 2021
Fake News Fighters at The British Science Festival
Looking forward to being on a panel with Gina Rippon and Gavin Evans at The British Science Festival, Europe’s longest-established science festival, which will bring 100+ events to Chelmsford this September! Tickets are FREE, but limited. Book yours now! #BSF21
May 20, 2021
AI and HR: the clue's in the name (Newsweek Interview)
AI will impact us in many areas, and HR (human resources) is a big one. The clue is in the name: this is about humans (although I must admit I don't like the idea of people being treated as mere resources). Anyway, I've been working with Kevin Butler, an expert on HR software, in a company called Centigy, trying to provide means for AI to be certified to treat people in fair and unbiased ways as they pursue and progress in their jobs. This will be really important, particularly since regulatory bodies are beginning to pay attention, with HR having been identified as one of 5 "high risk" areas for the application of AI in the EU. This will affect AI companies everywhere, since the regs will apply to anyone who wants to do business in Europe.
Newsweek picked up on what Kevin and I are trying to do at Centigy and published a great interview that came out today. Please have a read; it's an informative piece on something I think is genuinely important.
May 13, 2021
GCS Connect Leaders Series Interview
I had a really interesting conversation with GCS Recruitment in their leadership series, which is available both as a video and as a podcast. It was great to try and connect issues of leadership to ideas about AI. Have a look or listen if that's your bag!
April 28, 2021
New Paper: dispelling some AI Hype for the Defence Community
As a co-author with Ravi Ravichandran (Vice President & CTO for Intelligence & Security at BAE Systems) and Chee-Yee Chong, I am proud to announce that we have a new paper out, entitled Artificial intelligence and machine learning: a perspective on integrated systems opportunities and challenges for multi-domain operations. The paper was presented by Ravi at the SPIE DEFENSE + COMMERCIAL SENSING conference this month and is published in the proceedings volume entitled Artificial Intelligence and Machine Learning for Multi-Domain Operations Applications III.
I think it’s a great paper for the defence community, as it places AI in a more historical context and the context of real-world defence systems. I think it’s essential in developing such systems that we get the AI Hype out of the way and focus on helping people make the real, hard decisions involved in defence. Ravi and Chee did a great job on this paper, and I’m glad to have contributed to it as well.
You have to purchase the paper to read it or watch the presentation (organisations that run conferences have to pay for themselves somehow!), but if this topic is of interest to you, you might want to!
April 12, 2021
We and AI Response to the UK Gov Race Report...
Click here to read a response to the UK Government's Race and Ethnic Disparities Report 2021 by We And AI, a group for which I'm proud to serve as a trustee. The full response highlights shocking elements of the report, which has probably done more damage than good. In my opinion, the government should withdraw this report and put out something that actually reflects the realities of structural racism in the UK (including algorithmic bias). We and AI were amongst the many groups that offered evidence, but were effectively ignored in the report that was issued.
January 25, 2021
Of Zuck and Trump, on Rising Up with Sonali
Just now, I was privileged enough to be interviewed by Sonali Kolhatkar for her great syndicated radio and TV show, Rising Up with Sonali. We talked about Mark Zuckerberg having called in the Ethics Panel Facebook established back in May 2020 to adjudicate the decision to indefinitely suspend one Donald J. Trump's FB account after the January 6 insurrection in Washington D.C. The Trump ban was certainly the right move, but it’s unclear that this move is towards the essential social media regulation that may eventually improve social discourse.
You can hear our conversation on KPFK (90.7 FM, Los Angeles) and various other radio stations today (Jan. 25, 2021), and tomorrow on Free Speech TV.
January 21, 2021
New Podcast, Connecting Rage to Safety
I’m pleased to have been featured in the Embracing Differences with Nippin Anand podcast, in an episode entitled Artificial Intelligence: Understanding The Bias Built Into Machines. A unique direction the conversation takes is the issue of safety. Nippin is an expert on accident reporting, particularly around seagoing vessels. He and I share observations about how the schematization of communication, both in AI systems and safety reporting, obscures human meaning. Have a listen, and I hope you enjoy it!
December 18, 2020
For my Portuguese-Reading Fans
For my many Portuguese-reading followers, there's an interview with me in Exame today.
November 26, 2020
Investigating the Invisible (new podcast series)
I’m really proud to be in the first episode of a new podcast series called “Investigating the Invisible”, hosted by Kevin Butler (of Centigy, a firm that’s advising companies on the benefits and risks of AI in HR) and Peju Oshisanya (of BenevolentAI, a company focused on using AI to find better medicines, while avoiding bias effects).
Even prouder because this first episode - entitled AI and Bias: is AI Biased (and is that our fault)? - also features Angela Saini, whose books Inferior: How Science Got Women Wrong and Superior: The Return of Race Science (along with her great, related series with Adam Pearson on The BBC, entitled Eugenics: Science’s Greatest Scandal) are amongst my highest recommendations.
If you find our episode interesting, you should check out the rest of the series, including two other episodes that were released simultaneously. Episode 2 is called Will AI Take Everybody’s Jobs?, and features Kevin and Peju in conversation with Deepak Paramarand (from Hitachi) and Jeff Wellstead (of Big Bear Partners). Episode 3 is called Ask Phillip, with Phillip Hunter, founder of CCAI Services, who has spent 25 years in product, strategy, design, and optimization for AI-powered automation, conversational and collaborative AI.
Investigating the Invisible is a project of We and AI, a new NGO / charity (of which I am proud to be a founding Trustee) that is dedicated to increasing public awareness and understanding of AI in the UK (with a particular near-term emphasis on coping with racial bias in/with AI).
If you find that exciting, check out the episodes linked above (which you can find on all the standard podcast platforms), or have a quick listen to this 3-minute trailer.
October 31, 2020
Disinformation Inoculation: How you can act now to stop US Election Chaos
I’m not going to bury the lead here: there’s information you need to share, now, particularly with your friends online with whom you most disagree. It could help save US democracy. That information is that the outcome of the US election is unlikely to be fairly known on Tuesday night, or even Wednesday morning.
Read on to find out why spreading this information, particularly beyond your own filter bubble, is vital.
You can rarely see a pandemic coming, down to within hours of it seeping into a population. But we have such a prediction today. And I believe you can act now to stop the infection’s spread and potential devastation.
This pandemic is one of US election disinformation that could push America over the edge, into chaos, wrecking structures, norms, and institutions in ways that we can’t even understand yet. Here’s why it could happen, why we know when it may start, and how your actions can help smother it before it takes hold.
The rare prediction is in a report from The Guardian, published just today. In essence, this US election is unique. Because of Covid19, and massive early voting in America, vote counting might take longer than usual. Since Covid19 causes more concern in densely populated areas, and because such areas are where many enthusiastic new voters are concentrated (young people, new citizens, etc.), city precincts are likely to report their votes later than usual, perhaps days later.
Combine that with the massive urban/rural divide in American political discourse: cities are far more aligned with Democrats, and rural areas go more for the GOP. Add in that the GOP candidate has made strong statements that cast doubt on the validity of the American voting process this cycle. All these signs point to the following scenario:
On Tuesday night, the precincts counted could look like a win for Trump, but it might take days to discover that this is, in fact, not the case.
To understand how this could sow disinformation, it’s important to realize that it’s not uncommon for American news outlets to call a state as having been won by a candidate with very few precincts reporting. It’s a questionable practice at the best of times, but one could, in the past, justify it with various statistics, exit polls, etc. But we all know that such mathematically guided guesses can be misleading, and even manipulative in the wrong hands. It’s not hard to see that such manipulation is more likely now than ever. And social media is sure to play a major part.
So, it’s likely that a disinformation storm online will happen soon after polls close on Tuesday night. It could cause chaos, maybe even violence. And we know it is coming, for once. So, what can we, the people, do about it?
The answer is for us all to act now, creating an online truth inoculation event, today. This is in line with the comments of Tom Ridge, the Republican Governor of Pennsylvania, and DHS Secretary under George W. Bush, who has said: “We’ve hopefully begun to inoculate and educate Americans around the necessity of patience so that every vote can be counted.” But have we? And more importantly, has that message broken out of today’s ever-present online filter bubbles?
Drawing on the research I discussed in my book, Rage Inside the Machine, we know that filter bubbles are an inevitable consequence of social network dynamics. Online messages rarely cross the dividing lines between polarised political tribes. But if we want substantial herd immunity to disinformation on election night (and in the days of counting that follow), we need as many people as possible to get the message that, this year, we need the patience to make sure every vote is counted, if we want democracy to work and continue.
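The point about messages rarely crossing tribal lines can be illustrated with a toy model (this is my illustrative sketch, not the actual model from the book or our research): two densely connected "tribes" with no ties between them trap a message inside its originating bubble, while a single cross-tribe tie lets it reach everyone.

```python
# Toy illustration of filter bubbles as a network phenomenon (a hypothetical
# sketch, not the research model): messages spread along social ties, so
# with no cross-tribe ties a message can never leave its bubble.
from collections import deque

def spread(edges, seed_nodes):
    """Return the set of nodes a message reaches via simple BFS contagion."""
    adj = {}
    for a, b in edges:
        adj.setdefault(a, set()).add(b)
        adj.setdefault(b, set()).add(a)
    reached, queue = set(seed_nodes), deque(seed_nodes)
    while queue:
        node = queue.popleft()
        for neighbour in adj.get(node, ()):
            if neighbour not in reached:
                reached.add(neighbour)
                queue.append(neighbour)
    return reached

# Tribe A: nodes 0-4; Tribe B: nodes 5-9; each fully connected internally.
in_group = [(i, j) for group in ([0, 1, 2, 3, 4], [5, 6, 7, 8, 9])
            for i in group for j in group if i < j]

# With no cross-tribe tie, a message seeded at node 0 stays inside Tribe A.
print(len(spread(in_group, {0})))             # 5 of 10 nodes reached

# One user at the "edge of the bubble" sharing across the divide (the tie
# 4-5) lets the same message reach the entire network.
print(len(spread(in_group + [(4, 5)], {0})))  # 10 of 10 nodes reached
```

Real personalisation algorithms make crossing even harder than in this sketch, since they down-rank out-group content; the point is simply that bridging ties, not more in-group sharing, are what carry a message out of a bubble.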
How do we break this message out of the filter bubble? Our research shows that it won’t happen naturally under the configuration of current social media information personalisation algorithms. But the algorithms aren’t the only actors in social media systems. You are an actor. And you can help. Particularly those of you on or near the edge of a filter bubble. That is, those of you who are connected (or can reconnect) to friends with whom you do not politically agree.
So, here’s the thing: push the following message to all your social media friends, especially those with whom you politically disagree! Pushing this information over the edge of filter bubbles is the key to rapidly overcoming polarisation and the effectiveness of the disinformation storm that is likely to start Tuesday night.
Here’s the (non-partisan) message to share, as much as you are able, over and over again:
1) We can’t expect all the votes to be counted on election night this year, due to Covid19.
2) We won’t know the real winner for days after the election.
3) If we care about democracy, we have to be patient and ignore any early declarations of any candidate having won.
4) If we are patient, we will eventually have a true result, preserving and sustaining the US democracy that we all want to believe in.
I suggest pushing this message and continuing to push it from now until every vote is fairly counted. A relentless, filter-bubble-breaking cascade of truth could overcome this rare, predictable disinformation event.
Please share.