The Future of Leadership: Rise of Automation, Robotics and Artificial Intelligence

Is Artificial Intelligence (AI) our greatest existential threat? Will AI take your job? Is privacy dead? Is Universal Basic Income a viable strategy or just a temporary bandage? Will AI solve all our problems? Will it make us happier? We can’t put the genie back in the bottle once it’s out. If we don't candidly answer the pertinent questions, we will only paint a false picture. We are standing at a crucial and pivotal point in history. It’s time for diversity in AI. This unprecedented technology will affect society as a whole, and we need individuals from diverse disciplines and backgrounds to join the discussion. The issues surrounding AI can’t be left to a small group of scientists, technologists or business executives to address. Our future and our children's future are at stake. More than ever, we need leaders who will stand on integrity and who will put people first. Do you want to take a glimpse into the future of leadership? The Future of Leadership: Rise of Automation, Robotics and Artificial Intelligence offers the most comprehensive view of what is taking place in the world of AI and emerging technologies, and gives valuable insights that will allow you to successfully navigate the tsunami of technology that is coming our way.

296 pages, Paperback

Published October 6, 2017

198 people are currently reading
692 people want to read

About the author

Brigette Tasha Hyacinth

8 books · 69 followers
Dr. Brigette Hyacinth is an international keynote speaker, bestselling author and thought leader on Leadership, HR and AI. Dr. Hyacinth is the founder of Leadership EQ. She is also an Independent Board Director.

She has been featured among the Top 100 HR Influencers and has been ranked a Top Leadership and International Keynote Speaker of 2025. She is a sought-after advisor for global conferences and Fortune 500 companies and has traveled to more than 100 countries across five continents to share her expertise. Her work emphasizes the importance of emotional intelligence, people-first leadership, and adapting to technological change in the modern workplace.

Moreover, Dr. Brigette Hyacinth is one of the top 20 most followed people on LinkedIn, with over 4 million followers.

Ratings & Reviews

Community Reviews

5 stars: 38 (29%)
4 stars: 37 (28%)
3 stars: 32 (24%)
2 stars: 17 (13%)
1 star: 6 (4%)
Displaying 1 - 7 of 7 reviews
Tucker
Author · 28 books · 226 followers
February 4, 2018
AI is coming whether we're ready for it or not, Hyacinth says. She cites a prediction from "Oxford University’s Future of Humanity Institute, Yale University" that within a decade "AI will outperform humans in truck driving, language translation, and writing highschool essays" while already "insurance underwriters and claims representatives, bank tellers and representatives, financial analysts and construction workers, inventory managers and stock listing, taxi drivers, and manufacturing workers jobs are coming into extinction." This will transform the job market, so there's no choice but to embrace it and to carefully consider one's job skills to stay relevant to the current moment. Relevant skills for humans include "complex tasks such as persuading or negotiating. Communication, emotional and social intelligence, creativity, innovative thinking, empathy, critical thinking, collaboration, and cognitive flexibility will become the most sought-after abilities." Hyacinth says she is interested in the human motivations that will drive the rise of robots more than in the technology itself.

I appreciated that she flagged how the biases of human programmers end up migrating into AI systems. This is illustrated in multiple scenarios: A computer programmed to judge beauty decided that only white women were "beautiful." Another tended to associate pleasant words with European names over African names. Résumés of European-Americans were accepted over those of African-Americans. Crime prediction software rated black criminals more likely to reoffend than white criminals. Chatbots, too, are designed to copy whatever they hear people saying. Some of the bias might be introduced inadvertently; for example, in designing AI to maximize behavior that "just helps sell more products," we might end up with a program that concludes that bigotry sells and that was never told to see any problem with that. Indeed: "If human well-being is not included in the optimization and internal model of reality, AI may become a danger." Thus, a weakness in Deep Learning is that it "considers truth simply as what it spots more frequently in the data, and false as what’s statistically more infrequent." Furthermore, most of the platforms are "being trained on a bed of English language semantics." While Hyacinth doesn't elaborate on what cultural assumptions and limitations are inherent to English, we can well assume that there are many.
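That point about Deep Learning treating "truth" as whatever is statistically frequent is easy to illustrate with a toy sketch (mine, not the book's). Assuming nothing beyond a tiny, deliberately skewed corpus, a model that scores names by which words co-occur with them will faithfully reproduce the skew, the same mechanism behind the name-association results she cites. Every name, sentence, and score below is invented purely for illustration.

```python
from collections import Counter

# Toy "training" text, invented for this sketch and deliberately skewed.
corpus = [
    "emily is a brilliant and wonderful engineer",
    "emily received a joyful promotion",
    "lakisha is an engineer",
    "lakisha faced a terrible and painful review",
]

PLEASANT = {"brilliant", "wonderful", "joyful"}
UNPLEASANT = {"terrible", "painful"}

def cooccurrence(name):
    """Count the words that appear in the same sentence as the given name."""
    counts = Counter()
    for sentence in corpus:
        words = sentence.split()
        if name in words:
            counts.update(w for w in words if w != name)
    return counts

def association(name):
    """Pleasant-minus-unpleasant co-occurrence score for a name."""
    counts = cooccurrence(name)
    return sum(counts[w] for w in PLEASANT) - sum(counts[w] for w in UNPLEASANT)

for name in ("emily", "lakisha"):
    print(name, association(name))
# "emily" scores higher only because the skewed text says so: frequency in the
# data, not any notion of fairness, decides what the model treats as true.
```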

A couple minor flaws: From beginning to end, the self-published book contains frequent minor errors of grammar (especially verb tense and conjugation) and punctuation (especially commas). The manuscript probably ran through spell-check but it needed to be kicked around lots more by a human editor. In general I do get extremely distracted by this, not only because it disrupts my comprehension of individual sentences but because it makes me wonder what else that I haven't noticed would have been changed if there had been more rigorous footballing. Hyacinth's occasional use of the masculine word "man" as a stand-in for universal humanity is strange for someone who worries about how computers will interpret sexist language. The book is also peppered with clip art that looks as if someone ran a Google image search for "futuristic intelligence." This art is decorative and not informational. All these lapses remind me of another industry that AI won't soon be taking over.

An occasional frustration: She sometimes namedrops incidents and doesn't elaborate on them, such as "Have a look at what happened to Microsoft’s chatbot 'Tay' on Twitter" or "Recently, a man in the US appealed his jail sentence, on the grounds it was handed to him by a robot (robojudge)." Well, I didn't buy a book to be told to run a search on Twitter (although maybe that's the future of learning, who knows), and I can't even run an accurate search about the robojudge case appeal without a who-when-where, much less can I guess why the author felt it was important and what the implications are.

More severely, about halfway through the book, she writes:
"AI: Its intelligence is just artificial. The truth is AI is never going to replicate man’s consciousness, because God breathed the spirit of self-awareness into mankind. There is no need to hype and scare people from AI. A program code will never replicate what God created, no matter how sophisticated. They are tools. An imitation of a fearfully and wonderfully created living breathing soul. It cannot feel and has no soul or heart. We cannot infuse life, or spirit into a robot or any other form of artificial life. It is dead, and without man’s input it is just scrap metal."

There's all kinds of problems with this paragraph. First, we already know that artificial intelligence is artificial. Why use the phrase "just artificial" to downplay or denigrate it when the whole question of the book is to what extent artificial intelligence can match or exceed natural intelligence? If "years of research is still required to reach a level where machines can," for example, "learn by themselves and come up with solutions without human help," that doesn't mean it will never happen. It is also possible that humans could become dumber, as Hyacinth acknowledges: "We are becoming far too dependent on others thinking for us instead of us thinking for ourselves. Then there are some who don’t have the inclination to consider, question, and reflect. We are not asleep but our consciousness is being slowly subdued." Second, she does not acknowledge that God is very infrequently discussed in technology and business and that her theological opinion might be in a minority and that, even if we assume that God created humanity, this does not in itself give us reason to assume that natural intelligence (which has been gradually augmented by the increasingly complex programming we've created) cannot match or exceed divine intelligence and therefore that humans can't create robots whose intelligence is on a par with that of humans. Third, even if we assume that God exists, why should we assume that the human "soul or [metaphorical] heart" exists and how do we know it can't be replicated in a robot? Moreover, why do all of these concerns suddenly appear halfway through the book? In light of this, I am mystified by her prediction that "people will value their relationships with chatbots and robots more than human beings. They will grow attached to them and consider them as real persons." Is she saying that people will be fooled by robots — that it's obvious to her that robots aren't people, but for some reason it's not obvious to everyone? Aren't people fooled in human relationships, too? So how does being fooled prove that robots aren't people?

I am puzzled by this comment: “The worst is to realize that the vast majority of people do not even have enough level of consciousness to understand the short, medium, and long-term consequences. Here we have the kidnapping of human thought by the totalitarianism of the Great Quantitative Oracle; people do not need to think.” That "kidnapping" is part of the appeal of AI. Human attention is limited, and some problems can probably be solved better by computers anyway. Part of the field of UX is streamlining decision-making for human users. If people literally "do not need to think" and if computers are doing it for them, there's no problem, or at least it's not the "worst" possible problem. It's computers relieving us of unnecessary cognitive labor.

Another big problem is an inconsistent treatment of the significance of feeling and ethics. She says machines are "devoid of compassion, feelings, empathy, and life," "will never make decisions based on the emotional and unknown," lack "gut instinct" when judging people and deciding how to treat them fairly, and "cannot inspire us like human teachers can." She adds that they "cannot read your mind to figure out your intentions, desires, and goals and your understanding about how to satisfy them. They don’t take the route that goes by the river because there’s an absolutely beautiful sunset this evening which will ease your stress." She doesn't provide evidence to support these assumptions. Then, she makes a small verbal slip while talking about how AI will transform the job market for humans: "Most US college graduates have a meaningless degree that will not land them a job" [emphasis mine]. Exactly what degrees are meaningless? She doesn't say. I hope it is not a dig at the humanities, especially since the humanities teach sentiment, morality, and critical thinking which are exactly the things she insists will become increasingly valuable because she believes they will never be automated, as when she brings up the common assessment that "the great differential of Albert Einstein’s physics in relation to that of Isaac Newton is due to the fact that Newton was not very versed in philosophy....Some of the biggest challenges imposed by technological advances are philosophical in nature, meaning good technologists must absolutely never leave the humanities aside."

The value of this book is that it gives a snapshot of where we are right now, as of its 2017 publication, in the field of rapidly growing AI capabilities, but it doesn't get at all the corners of the philosophical debate and misses the mark on some of those points. It is more of a motivational business book.
116 reviews · 1 follower
November 14, 2019
Overall I found this book interesting and a worthwhile read. It really feels like two books in one. The first half covers different aspects of AI and how it is impacting, and will impact, various industries and society as a whole (which I found really interesting). Then around halfway it shifts focus to leadership (to the point where I wondered if I was reading the same book), with a focus on Emotional Intelligence and a sprinkle of leadership qualities important in a changing AI landscape (I found this section less interesting personally).

I dropped a star for a couple of reasons:
- The frequent stock images of robots, “business people in a meeting”, etc. add no value (there were maybe a couple of diagrams that were actually relevant).
- Some sections came across as rambling, with quick-fire sentences that only loosely tied in with where things were at. It's almost like the author went “hmm, I have all these leftover dot points on this topic, I’m just going to smash them into a couple of paragraphs and chuck them in here”.
- Granted, the leadership topic was less relevant to me, but I think someone might get more out of a book covering that specific topic. For example, the intro to Emotional Intelligence was okay, but many of the leadership sections felt lost, as if they had wandered into the wrong book with only a nominal connection back to the main topic of AI.

However, I found the coverage of AI technologies fascinating, exciting, and scary all at once. The author mostly positions this as “this is happening”; some developments appear a bit science fiction or further away, but either way it gives us a lot to think about. I really liked how the author covered not just technology developments but also provoked thought on our responsibility for the future and ethical considerations. It’s a great read from the perspective of answering the question: “What is AI at a high level, what are the latest developments, and how will it impact the future of society and leadership?”
Samantha Nowatzke
710 reviews · 4 followers
July 20, 2022
Published in 2017, parts are already outdated, but it is a decent overview of how AI can (and maybe should) replace some jobs, and a look at the types of jobs that need humans with the capacity for empathy and compassion. I cannot advocate enough for people managers to 1) remain human and 2) be able to exhibit compassion and empathy. Too many people become managers because they are good at the work, without having the necessary skills to manage (or, better yet, lead) other humans effectively.
Elina
56 reviews
September 14, 2018
The author describes different perspectives on automation, including sides that are not always considered when talking about these topics. This offers a lot of takeaways and things to think about when considering automation. AI will play an important role in the future; the question is whether we will be ready for it or not.
24 reviews · 1 follower
July 7, 2020
A very good book, full of information about the field and explaining a lot of concepts and events. It is starting to become less current than one would want, but it is a good introduction. The leadership part was quite small and not that interesting or innovative, though it did give a few good in-context pointers that I really appreciated.
Sonal Goel
1 review · 21 followers
August 6, 2020
Overall an interesting read. It is not that we are unaware that AI is the future, but it is scary to have a reality check that we might soon lose our jobs as most tasks become AI-driven. Yes, it is an eye-opener, and the book talks about the areas we need to focus on in order to keep up with the growing market.
