
Scholarship Announcement

As a growing, proudly Canadian company, Surex wants to encourage more Canadians in their studies, so we launched our own scholarship program this year and received almost 100 submissions! We hope to grow this scholarship and award many Canadian students in the years to come.

Today we are announcing the winner of our 1st Annual Surex Scholarship, on the topic of AI. The winner of our 2019 scholarship is Mr. Chinonso Obeta from Kelowna, BC! Thanks to his impressive essay, titled "“Automation is coming” – or is it? Dive into the upper limits of AI, and how it could change our lives very soon", Chinonso Obeta has won and will be awarded $1,000.00!

We’d like to thank all who applied for our scholarship; we appreciate the time you took to write your essays and submit your ideas. As this is an annual scholarship, we encourage you to apply again next year.

For more details, please visit our scholarship page at www.surex.com/scholarship.

The full essay by Chinonso Obeta is below:

 

“Automation is coming” – or is it? Dive into the upper limits of AI, and how it could change our lives very soon

Gently massaging his temples in frustration, the chess player made the seventh move of the match – a move that turned out to be his undoing. His opponent retaliated by sacrificing a knight, creating an unprecedented angle of attack. Within just 11 more moves, his opponent had built a position so strong that the chess player had no choice but to concede defeat – a concession tinged with accusations of foul play and cheating. At first glance, this is your run-of-the-mill, ordinary chess match. But the two players were far from ordinary. Garry Kasparov – a chess grandmaster, legend, and an individual widely considered to be the greatest chess player of all time – was unceremoniously defeated by IBM’s artificial intelligence supercomputer Deep Blue, the first artificial entity to defeat a reigning chess world champion.

The irate Kasparov argued, to no avail, that the AI was actually being controlled by a human grandmaster. He and his supporters based this belief on their view that Deep Blue’s playing style was too humanlike to have originated from simple code. The reality Kasparov could not accept was that, in the face of his own emotionally charged behavior, Deep Blue persevered through its unwavering commitment to inhuman, immovable logic. IBM staved off the accusations of foul play, maintaining that Deep Blue was neither controlled by a human nor specially adapted to Kasparov’s style of play. The team had the foresight to collect data on nearly every imaginable chess move, incorporate that data into Deep Blue’s logic architecture, and then rely on the AI’s ability to choose the correct course of action.

In the absence of emotional “corruption”, an entity that operates entirely on logic, strategy, and foresight – three crucial components of human intelligence – can comfortably come out on top more often than not in an endeavor such as chess. Combined with the fact that Deep Blue could process a staggering 200 million positions per second – a feat that has yet to be replicated in dedicated chess software – the AI was a supremely formidable opponent. In fact, it could be argued that simply keeping pace with the AI was Kasparov’s greatest achievement.

Today, AI has many modern-day applications, such as Siri on Apple’s iPhone or Amazon’s algorithms that predict potential purchases based on online behavior. Every AI has the core goal of problem-solving, which it attacks using the aforementioned traits of logic and strategic behavior. Contrast this with the human method, which generally wins through pattern recognition: we rely heavily on our knowledge of tangentially related facts and experience that may lend itself to the task at hand, as well as our innate intuition and emotional temperament. In certain scenarios, this is the preferred method. Take, for example, the occupation of counselling. Say a woman has been arguing with her spouse over something trivial. An AI therapist that lacks emotional logic would naturally see two solutions to her problem. The first, and most preferable, is that the couple stop arguing, since the issue at hand is not important enough to argue about incessantly. Failing that, the AI would tell the woman to leave the situation – because what else can she do to solve her problem? Conversely, a human counselor, drawing on emotional logic and past interactions with other individuals, would encourage the woman to communicate with her partner and get to the problem’s source. They would (hopefully) advise the couple to be in tune with their emotions and keep them in check – an AI cannot do this, because it is simply built to operate abstractly.

This presents a real dilemma for AI design. In a chess match, an AI is formidable because it doesn’t rely on emotional cues or nuance – cold, hard logic substitutes for both. But the real world does not operate on logic alone. Logic is not enough to manage the moods and emotions of a corporation’s several hundred employees. Logic is not enough to diagnose a patient with a disease. And herein lie two problems.

The first problem: how can emotional processing be incorporated into artificial intelligence? To answer this, we must understand the purpose of emotions in humans. Emotions are the product of millions of years of evolution. Charles Darwin postulated that emotions exist to help humans survive. This leads to several interesting questions:

 

1) Is it possible to provide artificial intelligence with emotions?

We can attempt to study the subconscious and uncontrollable interactions in the human body that both create emotions and occur because of them, then attempt to condense them into an algorithm and incorporate that algorithm into a machine’s source code. The reality is that we are very far away from this. To move past this problem, humanity has created machine-learning systems, which build models and automate problem-solving by identifying patterns across vast libraries of data. On a smaller, more personal scale, your smartphone uses machine learning when, for example, it combines location data and timestamps to determine the most efficient routes to avoid traffic and shorten your commute to work. On a larger scale, machine learning can be used for anomaly detection – raising suspicion about outliers in large datasets – to flag bank fraud, or potential medical issues in patients exhibiting certain symptoms.
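The fraud-flagging idea can be sketched in a few lines of Python: an illustrative z-score outlier detector, a deliberately minimal stand-in for a real machine-learning system, that flags transactions far from the mean. The data and threshold are made up for illustration.

```python
from statistics import mean, stdev

def flag_outliers(amounts, threshold=2.0):
    """Return amounts whose z-score (distance from the mean,
    in standard deviations) exceeds the threshold."""
    mu, sigma = mean(amounts), stdev(amounts)
    return [a for a in amounts if abs(a - mu) / sigma > threshold]

# Seven ordinary card charges and one suspicious transfer
transactions = [42.0, 55.0, 48.0, 51.0, 47.0, 53.0, 49.0, 5000.0]
print(flag_outliers(transactions))  # [5000.0] -- the transfer stands out
```

Real systems use far richer features and models, but the principle is the same: learn what “normal” looks like, then flag what deviates from it.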

 

2) Assuming it is indeed possible to bestow emotions on AIs, how would it benefit humanity?

This is more controversial. If we bestowed emotions on machines, could they form real interpersonal relationships? Perhaps an all-powerful AI would not seek to destroy humanity, as many people fear, out of compassion for us. Conversely, an AI could just as easily develop negative feelings or ill will towards humans.

 

3) To reach its upper potential as an entity, will AI require emotional processing components?

This takes us into murky waters. It would be of tremendous value for an AI to be able to detect basic human emotions and modify its reasoning accordingly – useful, for instance, in the aforementioned AI-therapist scenario. However, this would not change the core function of an AI, which is to sift through a large number of scenarios and possibilities and arrive at an effective conclusion or solution. The AI might perform that function with more nuance, but at its most basic level it would be unchanged. After all, an AI does not need to feel emotions; it need only give the appearance of doing so.

The second problem is processing power. Many people are entirely convinced that machines surpassed humans in processing speed long ago. This premise is unequivocally false. The fastest supercomputer at the time of this writing is Summit, built by the United States Department of Energy. Summit can reach a theoretical 187.66 petaflops, the highest ever achieved – one petaflop is one thousand-trillion calculations per second, an astonishing amount. There’s no way the human brain can reach that, right? Wrong. The human brain operates on the next order of magnitude, achieving an estimated 1 exaflop – one billion-billion calculations per second. Summit doesn’t come within shouting distance of our brains. In fact, researchers once tried to match one second of activity from just 1% of the brain. That sounds like a trivial amount, yet it took the K computer (then the 4th fastest supercomputer in the world) 40 minutes to complete the calculations for that single second of brain activity!

So, why does this matter? Because the functions of AI are predicated on logic, reproducibility, predictability, probability, and mathematics. While these are excellent behavioral parameters, they are supremely rigid – they limit an AI to performing only a few core functions at a time. What makes the brain vastly superior is its ability to rewire itself, a feature called “plasticity”: neurons can disconnect and reconnect with others, a feat that even the most carefully constructed supercomputer cannot accomplish. This ability manifests in areas such as visual recognition, recovery from brain trauma, and physical and mental development. The brain has essentially perfected machine learning: it is not only more efficient at identifying and incorporating new ways to compute, it also has the raw power to actually do so.
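For scale, here is the rough arithmetic behind that comparison, using the essay’s own figures (estimates, not precise benchmarks):

```python
# Rough arithmetic behind the brain-vs-supercomputer comparison,
# using the essay's figures (estimates, not precise benchmarks).
PETAFLOP = 1e15             # one thousand-trillion calculations per second
EXAFLOP = 1e18              # one billion-billion calculations per second

summit = 187.66 * PETAFLOP  # Summit's theoretical peak
brain = 1 * EXAFLOP         # estimated human brain throughput

ratio = brain / summit
print(f"Brain vs. Summit: {ratio:.1f}x")   # ~5.3x

# K computer: 40 minutes of compute for 1 second of 1% of the brain
slowdown = (40 * 60) / 1
print(f"K computer slowdown: {slowdown:.0f}x")  # 2400x
```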

The two problems mentioned above are holding AI back from being widely incorporated into the systems of businesses around the world. One development that could potentially bypass them is quantum computing. A classical computer uses the binary system (the digits 0 and 1) to store data as bits. Quantum systems use quantum bits, or qubits, which exist in a superposition of 0 and 1 – both values at once until measured. Such a system would allow for a vast space of possibilities, which would be massively consequential for AI. Qubits make it far easier to compute probabilities across many different choices and to incorporate patterns and experience – not because of an exponentially higher processing speed, but because they drastically reduce the number of steps needed for a computation.
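The superposition idea can be illustrated with a toy state-vector sketch in plain Python – not a real quantum programming framework, just the textbook math for a single qubit:

```python
import math

# A qubit as a pair of amplitudes for the basis states |0> and |1>.
# The Hadamard gate puts |0> into an equal superposition of both.
def hadamard(state):
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

zero = (1.0, 0.0)            # the classical bit 0
superposed = hadamard(zero)  # amplitudes (1/sqrt(2), 1/sqrt(2))
probs = [amp ** 2 for amp in superposed]  # Born rule: |amplitude|^2
print(probs)                 # ~[0.5, 0.5]: equal odds of measuring 0 or 1
```

Until the qubit is measured, both amplitudes are present at once; measurement collapses it to a single 0 or 1 with the probabilities above.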

If we could build a fully functional quantum AI – an endeavor that has escaped humanity thus far – the implications for many important industries would be massive. One excellent example is chemistry. Chemical reactions are quantum by nature, forming highly entangled superposed states – states that a classical computer cannot properly analyze. A quantum computer, however, would have no problem evaluating even the most complex chemical reactions. This could be consequential in the pharmaceutical industry, as a quantum AI could efficiently model the chemical structures of potential drugs and evaluate their effects on complex biological systems and diseases.

Another example is codebreaking. Say your online banking password is protected by 128-bit encryption, which is widely considered computationally unbreakable and has in fact never been brute-forced. Most brute-force programs use exhaustive search, trying every possible combination of the given form (such as a password known to have 14 letters and 2 numbers). It would take the most powerful classical codebreaking AI approximately 6.8 x 10^18 seconds – roughly 215 billion years – to crack a 128-bit cipher. Unfortunately for the attacker, the observable universe has only existed for approximately 13.8 billion years. With a quantum AI, this time reduces to just approximately 6 months. Then again, you could also get lucky in the first 10 tries!
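That time estimate is easy to sanity-check. The snippet below converts the essay’s 6.8 x 10^18-second figure to years, and shows the square-root speedup of Grover’s algorithm – the standard quantum result for searching an unstructured keyspace:

```python
# Sanity-checking the codebreaking estimate. The 6.8e18-second figure is
# the essay's; the square-root speedup is Grover's algorithm, the usual
# quantum-search result for an unstructured keyspace.
SECONDS_PER_YEAR = 365.25 * 24 * 3600

classical_seconds = 6.8e18
years = classical_seconds / SECONDS_PER_YEAR
print(f"Classical brute force: ~{years:.3e} years")  # ~2.155e+11

keyspace = 2 ** 128             # size of a 128-bit keyspace
grover_tries = keyspace ** 0.5  # Grover searches N keys in ~sqrt(N) steps
print(f"Classical tries: ~{keyspace:.2e}; quantum tries: ~{grover_tries:.2e}")
```

Note that even the quantum figure of ~1.8 x 10^19 steps is enormous; the “6 months” claim depends heavily on assumptions about how fast each quantum step runs.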

Other sectors where quantum computing could be useful include financial modeling, particle physics, and weather forecasting. In 2019, IBM unveiled the IBM Q System One, a quantum computer that is already available for commercial use.

In conclusion, classical AIs would be supremely effective in industries such as marketing, advertising, and data analytics. AIs could even be effective in the classroom as highly personalized tutors in areas such as math or science. These endeavors share the central requirements of logic and reasoning, which are necessary for optimal results. In other sectors, such as philosophy, the social sciences, and the humanities, AIs would fall short due to their lack of the emotional infrastructure necessary for success.

Additionally, quantum computers would thrive in sectors that have removed the human element and fundamentally operate on informational and scientific properties. They would also be highly effective in systems disrupted by randomness, as randomness is the very domain in which they operate – after all, the measurement outcomes of particles in a quantum entangled state are inherently probabilistic.
