The Turing Point

THE TURING P01NT | ISSUE #1

Published: March 15, 2022

Professor O’Sullivan, Director of the Centre for Research Training in Artificial Intelligence (CRT-AI)

Director’s Message 

Welcome to the first edition of the CRT-AI newsletter: it is our pleasure to announce that this is The Turing Point. 
Our goal is to keep you informed of our latest developments and news and to provide you with valuable information during your PhD journey. 
The CRT-AI newsletter committee is made up of students from cohorts one through three. There are six committee members, and the advisory committee is composed of the six Co-Directors of the CRT-AI.

Whether people realize it or not, AI is now part of their daily lives, not a piece of fiction. A number of industries have become prominent adopters of artificial intelligence, including high tech and telecommunications, financial services, and healthcare. 
Corporate investment in AI has accelerated worldwide during the Covid-19 pandemic, particularly in areas such as biotech, with the biggest increases in healthcare and pharma, and AI hiring, investment, and adoption all appear to have grown globally.
The training of PhDs in this area is necessary to better meet the needs of the sector, improve efficiencies, and improve the future lives of millions of people. 
Last but not least, spring is approaching, and we will all be able to get up and move again with plenty of fresh air after a long period of hibernation. As you pursue your PhD, I wish you good health, happiness, and success. 
Please contact the newsletter committee through the CRT-AI Slack workspace if you would like to donate your time. 

Ace Your PhD Life With Soft Skills 

Soft Skills Toolkit

Soft Skills: What are they? 

Soft skills are core skills sought after in every profession. Which soft skills are relevant for your career? And which ones should you focus on developing?

Read More 

Communication with your supervisor

Your supervisor has a vested interest in your success. Set the right tone and communication style when you meet with them.

Read More 

Waiting for the motivation fairy

It’s easy to give in to procrastination — but Hugh Kearns and Maria Gardiner offer some tips for getting your drive back.

Read More 

Escape Your Chair challenge

Escape your chair by signing up to this challenge

Read More 

Find Your CRT-AI Science Buddy

Fancy a collaboration with your CRT-AI mates? Have you created an amazing tool that nobody knows about? Have you spent hundreds of hours gathering an incredible database that only you are using? We have your back!

The CRT-AI’s Padlet now provides you with a nice place to advertise your work and find your AI soulmate <3. Whether you are looking for a collaboration or have a nice database to share, you can publish a post with everything you want people to know. Even if that isn’t the case for you yet, remember to save this Padlet to your favorites; you could be surprised by how inspiring this mood board will soon be! We welcome every single one of our members and want everyone to feel comfortable, so you can post by yourself or by filling in our special form!

Find your buddy, or fill in the form and we will do the job

Featured Article

BlueDot: AI Goes Viral

by Yanlin Mi

Before the COVID-19 pandemic swept the globe, Artificial Intelligence (AI) was already being used in almost every industry. In the global fight against the pandemic, it has fully entered everyday human life. A new generation of information technologies, such as artificial intelligence, has improved our understanding of disease in the fields of medical care, health, and public health.
AI and big-data predictive modeling are improving and becoming increasingly accurate, which means more accurate projections of disease spread and more time to produce more effective treatments. 

BlueDot is an AI- and data-driven public health risk assessment firm that specializes in tracking, finding, and conceptualizing the spread of infectious illnesses. It accomplishes this by combining AI and Natural Language Processing (NLP) techniques to process data from a variety of sources, including national statistical reports from various regions, global news media, global airline ticket data, population density data, global infectious disease alerts, climate reports, and insect vector and animal disease repositories. These datasets are used to train AI models, which allow its systems to deliver warnings in real time and on a consistent basis. This makes it possible to conduct “automated infectious disease surveillance”.

Indeed, BlueDot detected the COVID-19 outbreak in Wuhan on December 31, 2019, six days before the US Centers for Disease Control and Prevention (CDC) and nine days before the World Health Organization (WHO) issued warnings. It was among the first in the world to identify the emerging risk from COVID-19 in Hubei province. 

Each day, BlueDot analyses around 100,000 articles in 65 languages for keywords related to pandemic diseases, animal diseases, public health, and global flight dynamics. By combining this with a global database of airline tickets, it tracks the flow of infected populations, predicts which cities are likely to be infected next, and informs its clients of potential outbreaks and the spread of infectious diseases. 
BlueDot’s applications are not restricted to Covid-19: in 2016 it predicted the spread of the Zika virus to Florida six months before the official report, and in 2014 it correctly predicted the danger of an Ebola virus outbreak in West Africa. BlueDot’s COVID Data Suite delivers bespoke, near-real-time intelligence to governments, hospitals and airlines to track COVID-19’s movements, and the company states that the technology can also be used to track diseases including meningitis, yellow fever, and anthrax.

Read more

Turing Point Podcast

The inaugural Turing Point Podcast! Cathy Roche moderates a lively conversation with Dr. Dave Lewis and Dr. P.J. Wall about AI Ethics and how AI technology can have a positive impact!

Artificial Intelligence Dictionary

Coming to your AID (AI Dictionary)

by Cathy Roche

Artificial Intelligence

/ɑː(ɹ)təˈfɪʃəl/ /ɪnˈtelɪdʒəns/

As a term, Artificial Intelligence (AI) refers to computer systems that perform functions and tasks, such as facial recognition, once thought to be exclusive abilities of living, intellectual beings. Computer systems can be designed to achieve goals similar to human activity (e.g. learning and reasoning). AI can also be optimised to exceed human competencies, for instance by identifying the variables that have the greatest impact on an outcome. While the definition of AI has endured over time, examples of such technology have varied as the ability of computer systems to simulate human behaviour and thinking has progressed. A computer’s ability to play chess was once considered an example of AI; now the term is used to describe smart assistants and self-driving cars.

Machine Learning

/məˈʃiːn/ /ˈlɜːnɪŋ/

A branch of Artificial Intelligence, Machine Learning (ML) is a technique based on systems learning from data. ML describes how a computer system teaches itself to identify patterns within data and can make decisions with little human input. As the machine learns automatically from past data, there is no need for explicit programming. Through ML, data is analysed through the automated building of analytical models: the system adapts its algorithms to improve the accuracy of pattern detection based purely on the data observed, using its own statistical tools. Just as humans make decisions based on previous experience, ML enables computers to keep learning and adapting as new data arrives. For example, consider how humans learn about dogs. Our initial dataset may include seeing family pets, dogs on TV or pictures online. From this dataset, we feel able to identify whether future animals with a tail, four legs and of a certain size are in fact dogs. What happens when we encounter data that differs from our initial dataset, e.g. a fox, which is not a domestic dog? We refine our thinking about dogs so that, in future, we can more accurately determine whether an animal is a dog, even when dogs come in different heights and weights.
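
To make the idea concrete, here is a minimal, purely illustrative sketch in Python using scikit-learn; the animal features, measurements and labels are invented for this example and are not from any real dataset.

    # A minimal sketch of learning from data (all numbers invented for illustration).
    # Requires scikit-learn: pip install scikit-learn
    from sklearn.tree import DecisionTreeClassifier

    # Features per animal: [number_of_legs, height_cm, weight_kg]
    X_train = [
        [4, 60, 30],   # labrador
        [4, 25, 8],    # terrier
        [4, 35, 6],    # fox
        [2, 30, 1],    # parrot
        [4, 25, 4],    # cat
    ]
    y_train = ["dog", "dog", "not dog", "not dog", "not dog"]

    # No explicit rules are programmed: the model builds them from the examples.
    model = DecisionTreeClassifier(random_state=0)
    model.fit(X_train, y_train)

    # A previously unseen animal: four legs, 50 cm tall, 20 kg.
    print(model.predict([[4, 50, 20]]))  # likely ['dog'] -- the model generalises from past data

Feeding the model further examples (a chihuahua, a wolf, another fox) and refitting is the machine-learning analogue of refining our own idea of what a dog is.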

Neural Network

 /ˈnjʊər(ə)l/ /ˈnɛtwəːk/

Structured in a way similar to how neurons are organised in the brain, artificial neural networks (or neural networks) are a set of algorithms designed to recognise patterns. Consisting of layers of nodes, each designed to behave in a way comparable to a neuron, neural networks are used in Deep Learning. The first layer is the input layer, then there are hidden layers and lastly, the output layer. Each node performs a calculation and the result is then passed to other nodes in the neural network. Within this model, the neurons (in the simplest case, perceptrons) receive the input and apply an assigned weight; as the model is trained, these weights are adjusted to make the model more accurate. Neural networks can be used to classify and cluster, and there are several types, such as recurrent or convolutional. Which type of neural network is suitable depends on the task or application being undertaken.
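
As a minimal sketch of the forward pass described above (NumPy, with randomly initialised weights standing in for trained ones):

    # One forward pass through a tiny neural network: input -> hidden -> output.
    import numpy as np

    def relu(x):
        return np.maximum(0, x)          # a common node activation function

    rng = np.random.default_rng(0)

    x = np.array([0.5, -1.2, 3.0])       # input layer: one example with 3 features

    W1 = rng.normal(size=(4, 3)) * 0.1   # hidden layer: 4 nodes, each with its own weights
    b1 = np.zeros(4)
    hidden = relu(W1 @ x + b1)           # each node: weighted sum + bias, then activation

    W2 = rng.normal(size=(2, 4)) * 0.1   # output layer: 2 nodes (e.g. two classes)
    b2 = np.zeros(2)
    logits = W2 @ hidden + b2
    probabilities = np.exp(logits) / np.exp(logits).sum()   # softmax over the outputs

    print(probabilities)                 # training would adjust W1, b1, W2, b2 to improve these

Training consists of repeatedly comparing such outputs with the desired ones and nudging the weights to reduce the error.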

Bias

/ˈbʌɪəs/

AI systems use machine learning algorithms to perform their function. These algorithms train on data and can often produce results which reflect erroneous assumptions in that data. This is because the data has human prejudices, both conscious and unconscious, encoded within it. Bias can take many forms, such as gender discrimination, ageism or racial prejudice. While AI systems reproduce human biases, they also scale them: perpetuating bias in decision-making to a level that could constitute algorithmic discrimination. An AI is considered biased if the decisions it makes penalise or reward groups of people for reasons that are prejudiced. AI bias can stem from several sources, including:

  • Low quality of the data used for training models. For example, an AI-based recruiting tool used in a predominantly male company or industry and trained on historical employee data is likely to replicate gender bias.

  • How training data is collected and/or processed. For example, a data scientist may exclude important entries or end up under- or over-sampling, which can penalise minority classes (see the sketch after this list).

  • Datasets that are not robust. For example, a dataset which fails to distinguish between different groups could lead to decisions which treat people in a uniform way, without considering important differences.

  • Weak model validation. For example, a model may perform very well on the training data but not be generalisable.

  • Implicit bias: For example, where human biases are already encoded in the datasets.
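
As a small, synthetic illustration of the sampling issue mentioned above (invented data, scikit-learn), a model trained on a dataset in which one group is heavily under-represented can look accurate overall while failing on the minority group:

    # Illustrative sketch: misleading overall accuracy on imbalanced, synthetic data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)

    # 950 synthetic samples from the majority group, 50 from the minority group.
    X = np.vstack([rng.normal(0.0, 1.0, size=(950, 2)),
                   rng.normal(1.0, 1.0, size=(50, 2))])
    y = np.array([0] * 950 + [1] * 50)

    model = LogisticRegression().fit(X, y)
    pred = model.predict(X)

    print("Overall accuracy:", accuracy_score(y, pred))                      # looks high
    print("Minority accuracy:", accuracy_score(y[y == 1], pred[y == 1]))     # much lower

Re-sampling, re-weighting, or collecting more minority-group data are common ways to mitigate this kind of imbalance.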

Ethics Washing

/ˈɛθɪks/ /ˈwɒʃɪŋ/

Also known as ethics shopping or ethics theatre, ethics washing refers to when an organisation pays lip service to ethics to signal to the public, shareholders and policy-makers that it is committed to responsible AI development but does little to ensure it happens in practice. In this way, an organisation can appear to take AI ethics seriously but in reality it is window dressing. The ethical policies and frameworks adopted are mainly for show and do not create any real accountability or obligation on the organisation.  Ethics washing is used to avoid external regulation and official scrutiny by appearing to have voluntarily adopted broad ethical obligations. The problem of ethics washing is that it masks whether organisations are actually making efforts towards developing ethical AI systems.

COMPETITION

LOL My Thesis

Tired of that too serious life? Take a step back and tell us about that laughable work you’ve been exhausting yourself on! Here it can be about your thesis, a project you have been working on, your Master’s or Bachelor’s dissertation, etc. It does not matter. All we want you to do is to provide us with a new title that highlights the absurdity of your work. https://lolmythesis.com/ is a rich bank of hilarious examples, and we might have stolen their idea. Yet, we know that misery does not only affect others, and now we want to laugh at our CRT-AI community. Come up with your best line and submit it here! The best ones will receive a chocolate bar and be featured in our next issue!

Send your entry

see examples

History of Neural Networks

by Sharmi Dev Gupta and Lavanya Vinod Pampana

It all starts with a fundamental question: “do machines think?”, or rather, “can we make machines think?”. If we could replicate the human brain in the best possible way, we might just achieve it and make the impossible possible. The next question that replication entails is more philosophical: “what is intelligence?” Is it innate to living organisms, or is it something that can be synthesized? 


In this quest to make machines think, the neurophysiologist Warren McCulloch and the mathematician Walter Pitts worked together in 1943 to understand how neurons work. They modeled a simple neural network using electrical circuits. This was the first step in the invention of an artificial neuron. The model’s simplicity was a major limitation: it only accepted binary inputs, used a threshold step activation function, and did not account for weights. In 1949, Donald Hebb proposed that when two neurons fire together, the connection between them is strengthened; this is now recognised as one of the fundamental operations underlying learning and memory. In the 1950s, Nathaniel Rochester of IBM Research Laboratories tried to simulate a neural network on the IBM 704. He was the chief architect of the IBM 701 computer, and his group worked on pattern recognition, information theory and related problems. In 1956, Rochester, along with John McCarthy (Dartmouth College), Claude E. Shannon (Bell Telephone Laboratories) and Marvin L. Minsky (Harvard University), formed a working group and submitted a proposal for a workshop to the Rockefeller Foundation.

This workshop took place in July and August 1956 and is recognized as the official birthplace of Artificial Intelligence. Attendees came from diverse backgrounds: electrical engineering, psychology, mathematics and more. An excerpt from the workshop proposal, which gives a broad definition of Artificial Intelligence, is shared below:

“An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. … For the present purpose the artificial intelligence problem is taken to be that of making a machine behave in ways that would be called intelligent if a human were so behaving.” [9]

“The attendees at the 1956 Dartmouth conference shared a common defining belief, namely that the act of thinking is not something unique either to humans or indeed even biological beings. Rather, they believed that computation is a formally deducible phenomenon which can be understood in a scientific way and that the best nonhuman instrument for doing so is the digital computer (McCormack, 1979).” [9]

Marvin Minsky, Claude Shannon, Ray Solomonoff and other scientists at the Dartmouth Summer Research Project on Artificial Intelligence (Photo: Margaret Minsky) [9]

An image of the perceptron from Rosenblatt’s “The Design of an Intelligent Automaton,” Summer 1958 [8]

It was a period when everyone was trying to implement a neural network for a real-time application. Although the traditional Von Neumann architecture dominated the computing scene, in 1957 John Von Neumann worked on imitating neural functions using telegraph relays and vacuum tubes, without much success. Frank Rosenblatt developed the first perceptron by modifying the McCulloch-Pitts neuron. The perceptron follows Hebb’s rule, weighting its inputs, and became the building block of neural networks. In July 1958, an IBM 704, a five-ton computer the size of a room, was fed a series of punch cards and, in 50 trials, taught itself to distinguish cards marked on the left from cards marked on the right. This was the first demonstration of the perceptron, an idea which is still in use today. Rosenblatt discussed the perceptron in detail in his 1962 book, Principles of Neurodynamics.

Around the same time, in 1959, Bernard Widrow and Marcian Hoff of Stanford developed models called ADALINE and MADALINE. In keeping with Stanford’s love of acronyms, the names stand for ADAptive LINear Element and Multiple ADAptive LINear Elements. ADALINE could recognize binary patterns: if it was reading streaming bits from a phone line, it could predict the next bit. MADALINE was the first neural network applied to a real-world problem, using an adaptive filter to eliminate echoes on phone lines. Though the system is as old as air traffic control systems, it is still in use. 

In 1969, Marvin Minsky and Seymour Papert proved the limitations of the perceptron in their book Perceptrons. At conferences, Minsky and Rosenblatt publicly debated the viability of the perceptron. 

“Rosenblatt had a vision that he could make computers see and understand language. And Marvin Minsky pointed out that that’s not going to happen, because the functions are just too simple.” [8]

The problem with Rosenblatt’s perceptron was that it had only one layer, whereas modern networks have many layers. The decisive problem was that the model could represent AND, OR, NAND and NOR gates but could not represent the XOR gate. This led to a winter period in neural networks, during which research progress in this direction halted for a few years. 
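
A small worked example (standard textbook material, not from the newsletter’s sources) makes the limitation concrete: no single threshold unit can compute XOR, but two layers with suitable weights can.

    # A single-layer perceptron computes step(w.x + b): one linear boundary.
    # No choice of w and b classifies all four XOR points correctly, because
    # (0,1) and (1,0) cannot be separated from (0,0) and (1,1) by one line.
    import numpy as np

    def step(z):
        return (z > 0).astype(int)

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])   # XOR inputs; targets are 0, 1, 1, 0

    # Two layers with hand-picked weights compute XOR exactly:
    # hidden unit 1 acts like OR, hidden unit 2 acts like AND,
    # and the output fires for "OR and not AND".
    W1 = np.array([[1.0, 1.0],    # OR-like unit
                   [1.0, 1.0]])   # AND-like unit
    b1 = np.array([-0.5, -1.5])
    W2 = np.array([1.0, -2.0])
    b2 = -0.5

    hidden = step(X @ W1.T + b1)
    output = step(hidden @ W2 + b2)
    print(output)   # [0 1 1 0] -- matches XOR, which no single layer can produce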

The thawing of this frosty AI winter began in 1982, when John Hopfield published what is known today as the Hopfield network in the Proceedings of the National Academy of Sciences (NAS). Around the same time, the field got a much-needed competitive boost when Japan announced its Fifth Generation computing effort, which included research on neural networks. This helped US research institutes argue for larger amounts of funding, and in 1985 the American Institute of Physics established the annual Neural Networks for Computing meeting, followed by the Institute of Electrical and Electronics Engineers (IEEE) in 1987.

The year 1997 was another milestone. A recurrent neural network framework, Long Short-Term Memory (LSTM), was proposed by Hochreiter and Schmidhuber. They introduced Constant Error Carousel units to deal with the vanishing gradient problem. 

In 1998, Yann LeCun published a seminal paper on gradient-based learning applied to document recognition. Gradient-based learning provided a framework for building systems with learning architectures that can handle high-dimensional inputs, a high degree of variability, and complex non-linear relationships between inputs and outputs.

Summaries of some of the important academic papers are discussed below: 

  1. Attention Is All You Need

The authors propose a simple network architecture called the Transformer, based on attention mechanisms, dispensing with complex recurrent networks and CNNs by relying entirely on self-attention. The model has an encoder-decoder structure: each encoder layer has two sublayers, a multi-head self-attention mechanism and a fully connected feed-forward network, while the decoder is composed of a stack of six identical layers in which an additional sublayer performs multi-head attention over the output of the encoder stack. Attention is applied in three different ways:

  • The “encoder-decoder” attention layers mimic the typical encoder-decoder attention mechanism in sequence-to-sequence models.

  • The encoder contains self-attention layers in which all the keys, values and queries come from the output of the previous layer in the encoder. Every position in the encoder can attend to all positions in the previous layer of the encoder.

  • The self-attention layers in the decoder allow each position in the decoder to attend to all positions in the decoder up to and including that position. 

The paper compares the self-attention model to recurrent and convolutional networks and concludes that self-attention layers are faster than recurrent layers in terms of computational complexity. The complexity of a separable convolution is equal to the combination of a self-attention layer and a point-wise feed-forward layer, i.e. the approach taken in the paper. 
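
A minimal NumPy sketch of the core operation, scaled dot-product attention, which weights the values V by the softmax of the scaled query-key similarities; the toy dimensions are invented and this is not the authors’ TensorFlow implementation:

    # Minimal sketch of scaled dot-product attention on toy inputs.
    import numpy as np

    def softmax(x, axis=-1):
        x = x - x.max(axis=axis, keepdims=True)
        e = np.exp(x)
        return e / e.sum(axis=axis, keepdims=True)

    def scaled_dot_product_attention(Q, K, V):
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)       # similarity of every query to every key
        weights = softmax(scores, axis=-1)    # attention weights sum to 1 for each query
        return weights @ V                    # weighted sum of the values

    rng = np.random.default_rng(0)
    seq_len, d_k, d_v = 5, 8, 8               # toy sizes, not the paper's
    Q = rng.normal(size=(seq_len, d_k))       # in self-attention, Q, K and V are all
    K = rng.normal(size=(seq_len, d_k))       # projections of the same layer's output
    V = rng.normal(size=(seq_len, d_v))

    print(scaled_dot_product_attention(Q, K, V).shape)   # (5, 8): one output per position

Multi-head attention runs several such operations in parallel on different learned projections of Q, K and V and concatenates the results.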

 This paper achieves a new state of the art on translation tasks. 

The code is available at https://github.com/tensorflow/tensor2tensor.

 2. Bandwidth Prediction and Congestion Control for ABR Traffic Based on Neural Networks

The paper uses back-propagation neural networks for congestion control in ATM networks, effectively predicting the bursty available bandwidth for ABR traffic and forcing the queue level in the buffer towards a desired region. The fairness of this model is achieved through the fair algorithm. 

 3. Using Artificial Neural Network Modeling in Forecasting Revenue: Case Study in National Insurance Company/Iraq

The paper aims to forecast the insurance premiums revenue of the National Insurance Company for the years 2012 to 2053 using an Artificial Neural Network, based on the annual insurance premiums revenue data available for 1970 to 2011. The authors used a neural network fitting tool to help select the data, create and train a network, and evaluate its performance. The annual investment income of the National Insurance Company is taken as the independent (input) variable. The experiments show that the best-fitting architecture has one input, five hidden neurons and one output (1-5-1), and that the insurance premiums revenue increases by approximately 120% over the period from 2012 to 2053.
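
For illustration only, a comparable 1-5-1 network (one input, five hidden neurons, one output) can be fitted in a few lines with scikit-learn; the revenue figures below are invented, and this is a stand-in for the fitting tool mentioned in the paper, not a reproduction of it:

    # Illustrative sketch of a 1-5-1 feed-forward network on an invented revenue series.
    import numpy as np
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    years = np.arange(1970, 2012).reshape(-1, 1)                        # one input: the year
    revenue = 50 + 2.5 * (years.ravel() - 1970) + rng.normal(0, 5, len(years))   # made-up premiums revenue

    model = make_pipeline(
        StandardScaler(),                                               # scale the input for the tanh units
        MLPRegressor(hidden_layer_sizes=(5,), activation="tanh",        # five hidden neurons, one output
                     solver="lbfgs", max_iter=5000, random_state=0),
    )
    model.fit(years, revenue)

    future_years = np.arange(2012, 2054).reshape(-1, 1)
    print(model.predict(future_years)[:3])                              # forecasts for 2012-2014 (synthetic)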

 4. Neural Network Approach to Forecast the State of the Internet of Things Elements

The aim of this paper is to use neural networks to predict the states of the elements in an IoT-based architecture. The proposed model is a combination of a multilayer perceptron and probabilistic neural networks. The authors analyse the performance of this model in terms of accuracy and efficiency. The combined ANN helps realise a forecasting and monitoring model for the Internet of Things, which results in reduced IoT administration costs and faster emergency resolution. 

 5. AnyNets: Adaptive Deep Neural Networks for medical data with missing values

The paper introduces a novel class of adaptive deep neural networks, called AnyNets, designed to remove the need to impute missing data in patient records in medicine. A large number of patient records contain incomplete information and measurements, which are often filled in with default values, causing bias and limiting generalisation. The paper processes various kinds of input values across medical datasets, under both supervised and unsupervised learning, and achieves better results on electronic medical record and registry data.

References

  1. History of the Perceptron
  2. Neural Networks History: The 1940’s to the 1970’s
  3. Brief History of Neural Networks
  4. A Concise History of Neural Networks
  5. Did Frank Rosenblatt invent deep learning in 1962?
  6. Professor’s perceptron paved the way for AI – 60 years too soon
  7. The Birthplace of AI
  8. Understanding Basics of Deep Learning by solving XOR problem
  9. Artificial Intelligence (AI) Defined
  10. A neural networks deep dive