Michael on Linked

Tuesday, September 3, 2019


When I was a youngster in the latter part of the 1940s, my siblings and I would spend holiday time at a relative's farm. The only communication with the outside world was a telephone with the mouthpiece attached to the body of the phone and the receiver at the end of a cord. As the farm was miles from nowhere, the telephone was on a party line, a circuit shared by two or more subscribers.

As I remember, there were approximately a dozen subscribers in the party, each distinguished by a particular phone ring. When a phone call had to be made, the caller would crank a handle on the side of the phone a couple of times to reach an operator. The caller would then ask the operator to put the call through to a particular number. There was little privacy as any party member could listen in on any other party member's call.

Seventy years have gone by and the world is now connected by digital communication devices called smartphones. These phones incorporate artificial intelligence in many ways: voice recognition, facial and fingerprint recognition, camera object recognition, and digital assistants such as Siri and Alexa.

The term artificial intelligence, or AI, was first used by John McCarthy, one of the "founding fathers" of the field. Building on the earlier work of Alan Turing, McCarthy, together with Marvin Minsky, Allen Newell, and Herbert A. Simon, organized the famous Dartmouth conference on the subject in the summer of 1956.

However, the journey toward understanding whether machines could actually think for themselves began some time before that date, with the invention of the programmable computer in the 1940s. The workings of this new machine were based on mathematical reasoning, and the deliberations and research behind it inspired some scientists to discuss the possibility of creating an electronic genius: an artificial intelligence.

Children born in the 1950s are now in their sixties; AI has been around for virtually their entire lifetime. Over the years, this generation and subsequent generations have adjusted to AI being commonplace in their lives, particularly those who live in the more technologically advanced societies. Children born in the 21st century enter a world where AI is integral to daily work and personal life. Voice-activated smartphones, AI in toys, GPS devices in family vehicles, smart homes, banking, and the internet are a few of the applications most commonplace in the lives of children today.

Babies born in the 21st century no more think of AI as a wonder as they grow into childhood than children born in the '30s, '40s and '50s thought about radio. From the time they are old enough to grasp objects, they are exposed to a broad variety of AI applications. AI is as much a part of kids' lives today as radio was in the lives of children in the 1940s and '50s, and television in the '70s and '80s.

As self-driving vehicles become more commonplace, children will be driven to school in autonomous school buses. Children in small rural communities will be taught by robot teachers or over the internet by intelligent bots. They will grow up in smart homes where instead of physically turning on and off lights, rooms will light up for them as they enter and darken when they leave. When they arrive home from school, AI technology will recognize them and open the door to the family home and lock it behind them where they will be greeted by their friendly AI home care bot. Children are now interacting with smart toys, refrigerators, entertainment centers, heating, lighting systems; all a part of their everyday life. And, AI researchers are now realizing that by paying attention to how children learn and process information, they can gain valuable information about how best to develop machines that learn.

Young adults and children who have grown up surrounded by digital technology are defined as digital natives. This group is growing exponentially. By 2025, digital natives are expected to comprise as much as 75% of the global workforce. With their comfort with and knowledge of AI and machine learning technology, digital natives are already having a tremendous impact on the worldwide business landscape.

As well as the soaring number of digital natives there will be a predictable decrease in the price of digital technologies, an increase of their geographical extension and a drop in the age of users. In the most wired areas, it is now commonplace to see small children in different parts of the world watching the same cartoon or interactive children's program on their parents’ laptop, smartphone or tablet at a beach, a park or on a plane or bus, providing downtime for their parents. This is one aspect of using artificial intelligence and digital technology as a mobile child caregiver. And this is only the beginning.

Imagine children immersing themselves in virtual reality worlds while robot nannies supervise them. Having AI friends or artificial teachers will become commonplace. As digital technology crosses our technological doorsteps, current and future parents must familiarize themselves with the latest digital tech aimed at kids to prepare for the impact AI will have on their lives. This is somewhat difficult for Millennial parents, because children growing up with AI technologies simply take them for granted. Pre-millennial parents, or digital immigrants as they are sometimes known, still remember a time when portable, Wi-Fi-enabled technology was the stuff of science fiction.

New parents of today and ongoing into the future need to teach their kids how to survive in the digital as well as the real world. The first generation to grow up in the 21st century will never remember a time before smartphones or smart assistants. They will likely be the first children to make riding in self-driving cars commonplace. As well, they will become the first human beings whose health care and education could be turned over to intelligent machines.

Futurists, demographers, and marketers are now beginning to agree on the specifics of what defines the next wave of humanity after Generation Z. The term Generation Alpha denotes children born into a fully realized digital age, and applies to children born since 2010. By 2025, Generation Alpha will account for 2 billion of the global population. These children are considered the most technologically savvy demographic to date.

Generation Alpha is born into the world of smartphones and tablets; they can't imagine life without them. A new generation of children's toys with personalities powered by artificial intelligence will give kids more than holiday playthings. Unlike electronic pets of the past, such as Furby and Tamagotchi, which sparked holiday crazes in the late '90s, the new robotic drones and droids on store shelves incorporate genuine AI technology. They include face recognition, respond to voice commands with reasonable consistency, and carry sophisticated AI processors: microprocessors and computer systems designed specifically as hardware accelerators for artificial intelligence applications in machine learning, neural networks and machine vision.

A very gifted little AI toy with a mind of its own is Cozmo, a real-life robot of a kind previously seen only in science fiction movies. This little toy has a one-of-a-kind personality that evolves the more a child plays with it; it will even nudge its owner to play and keeps kids constantly surprised. Cozmo and its more advanced cousin, Vector, were developed by Anki, a company founded by three graduates of Carnegie Mellon's robotics Ph.D. program. These little tank-like robots are so full of personality that even AI experts have to take an educated guess at how intelligent their artificial intelligence is. As AI advances, the technology will progress from pre-programmed responses to truly adaptive learning.

Parents will soon have to ask themselves: do we really want toys that grow with our children? For many, the answer will be yes. Not only will it be less of a drain on parents' wallets, but the new AI toys can tailor their entertainment and messages to the level of skill a child exhibits.

Educators have long known that knowledge becomes more firmly lodged in a student's brain when the student has to explain what they have learned to another student: in other words, peer learning. This effect was not anticipated in interactions with intelligent machines, and it is opening a whole new field of development, not only for children but for machines that will learn for themselves.

Groundbreaking research in children's education now indicates that a robot toy is most valuable to a child's education not as a teacher but as a student. In other words, creating an intelligent robot that purposely makes mistakes and prompts the child to correct them does more for the child's education than lecturing does.

Educational AI researchers propose that AI educational toys should always display slightly less intelligence than their child user, so that as the child "teaches" the robot, the robot steps up its game and continuously challenges the child.
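The "slightly less intelligent than the child" idea can be sketched as a simple difficulty controller. This is my own illustrative sketch, not any actual toy's software; the class name, the handicap value and the skill-tracking rule are all assumptions:

```python
# Hypothetical sketch: a toy tracks an estimate of the child's skill and
# always plays just below it, so the child can "teach" it yet stay challenged.

class TeachableToy:
    def __init__(self, handicap=0.15):
        self.child_skill = 0.5    # running estimate of the child's skill (0..1)
        self.handicap = handicap  # how far below the child the toy performs

    def observe_child(self, child_succeeded, rate=0.1):
        # Exponential moving average of the child's recent successes
        target = 1.0 if child_succeeded else 0.0
        self.child_skill += rate * (target - self.child_skill)

    def toy_skill(self):
        # The toy performs slightly worse than the child, never below zero
        return max(0.0, self.child_skill - self.handicap)

toy = TeachableToy()
for _ in range(20):        # the child succeeds at a string of puzzles...
    toy.observe_child(True)
print(round(toy.toy_skill(), 2))  # ...so the toy's play level rises too
```

The point of the sketch is the feedback loop: as the child's estimated skill rises, the toy's level rises with it, always staying a step behind.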

Where AI and the education and entertainment of children go from here, nobody knows. The only certainty is AI is here to stay. 

Thursday, August 22, 2019


The following can be added to the end of this post's heading: "to emails, letters, phone calls, job applications, charitable inquiries, sponsorship support and anything else you can think of."

As you can see from the above, the title to this post would have been way too long and meandering but I am sure you get the point.

The question is: why do so many businesses that spend thousands, if not millions, of dollars advertising what great companies they are have no corporate policy on responding to correspondence? Some companies take great pride in replying to all inquiries, regardless of the reason (job application, business inquiry, sales call, etc.), within 48 hours. A majority of businesses do not. Why not? Because they do not care. It's as simple as that. They either consider themselves too big to fail, or feel they have so little competition for their products or services that they don't care if they alienate some people, or they work on the principle: lose a client or customer today, find another one tomorrow.

What these companies fail to recognize is that every email they fail to respond to, every letter they ignore, every phone message they never return is from a prospective customer or client. I personally make it a policy never to shop with or buy from a company that does not respond to correspondence: not because I feel my boycott will make a difference to the company's bottom line, but because I refuse to support an arrogant company that has no respect for the general public.

If, during a conversation at a party or a backyard barbecue, the name of a company I boycott comes up, I make no bones about why I do not buy from them or use their services. I work on the principle that if one person tells five people, and each of those five people tells five more, and so on, eventually it will make a difference.

There is no reason for a business not to respond to correspondence. We live in the most connected age in the history of the world, with AI becoming more of a factor in everyday business operations, so really, there is no excuse. The worst culprits are big businesses. These companies can afford a department whose sole purpose is to respond to communications. I'm not talking about the retention department; that is an entirely different matter. I am talking about a department that communicates with correspondents, be they a job seeker waiting to hear about an application, an existing customer making an inquiry or a complaint, or an organization seeking sponsorship, to name just a few. The official term is Customer Communication Management, or CCM. Let's call it the Correspondence Liaison Department. This department's job is to make sure all non-personalized communications are responded to, to liaise with the particular department to which each query is directed, and to come up with a satisfactory response for the person behind the communication.

Maintaining high standards in responding to incoming communications is a sign of professionalism. Poorly structured or untimely responses, or ignoring some communications altogether, whether via email or postal mail, make customers feel underappreciated and undervalued and can result in lost business.

Consequently, having a correspondence etiquette policy for responding to mail and email is a key component of communications strategy for any business, small or large.

Wednesday, August 21, 2019


A very common practice among companies that advertise job vacancies, whether for employees, contractors, freelancers or remote workers, is to include in the advertisement a notification such as "Unsuccessful candidates will not be notified" or "only shortlisted candidates will be contacted," or words to that effect.

In today's job market, the sheer volume of applicants for a single position can overwhelm an HR department. But for many unsuccessful applicants, the lengthy process of applying deserves a bit of quid pro quo. With today's technology, how difficult can it be, even for a small company, to send candidates who did not make the cut a brief email or text message?

After all, they've put the effort into meeting the criteria and possibly even fronted up for a face-to-face interview. Yet often they wait in vain for an acknowledgment or, rarer still, some constructive criticism that might help them in their next attempt.

It was common practice in the pre-internet age for businesses, large or small, to send a brief letter through the mail to unsuccessful applicants. With all companies now having access to the internet and mass-emailing software, why can't they send a brief email to rejected applicants? Are they just too lazy? Too arrogant? Or do they simply not care?

The publishing business is the one industry that seems to have no problem issuing rejection letters and, in some cases, even offering constructive criticism to writers submitting their work. If publishing companies, large or small, can send out letters and emails, why can't other businesses?

Wednesday, July 31, 2019

The Humorous Side of Artificial Intelligence

Humor is what makes humans special. When people try to teach machines humor, the results are at times laughable but not in the intended way. 

What makes humans laugh? Sometimes laughter springs from conflicting emotional states: embarrassment, apology or confusion can cause nervous or courtesy laughter. Humans will laugh at jokes or at a play on words, even a very subtle one. Everyone has a different sense of humor; what is funny to one person will go over the head of another. Laughter, in some ways, is like human language, and what is hilariously funny in Japan may go over like a lead balloon in France.

Laughter bonds humans through humor. However, despite its prominence in our daily lives, there is little research on how and why we laugh. The study of humor and laughter and their psychological and physiological effects on the human body is called gelotology. The question as it relates to artificial intelligence is: can a sense of humor be taught to machines?

Humor is a hidden language that we all speak, but it is not a learned group reaction. It is more an instinctive behavior, programmed by our genes and the societies we live in.

Tristan Miller, a computer scientist and linguist at Darmstadt University of Technology in Germany, says: “Creative language — and humour in particular — is one of the hardest areas for computational intelligence to grasp."

Miller has analyzed more than 10,000 puns and called the experience torture. “It’s because it relies so much on real-world knowledge — background knowledge and commonsense knowledge. A computer doesn’t have these real-world experiences to call on. Up until recently, a robot or a computer could only know what it was told and could only draw from that knowledge.”

Great strides have been made in trial and error learning in the science of artificial intelligence, this being one of the fundamental learning strategies employed by humans and animals. It is increasingly being used to teach intelligent machines, boosting the flow of ideas between biologists and computer scientists. More studies of the trial and error approach could solve mysteries in animal and human cognition and help develop powerful new algorithms, moving us closer to AI with an ability to learn humor.
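Trial-and-error learning of this kind is usually formalized as reinforcement learning. As a minimal sketch of the idea, and not any particular study's method, here is a classic two-armed bandit learner: it tries actions, keeps a running estimate of each action's payoff, and gradually prefers the one that has worked best (the payoff probabilities and parameters are made up for illustration):

```python
import random

# Minimal trial-and-error learner: occasionally explore a random action,
# otherwise exploit the action with the best estimated value so far.

def run_bandit(pay_probs, steps=5000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    values = [0.0] * len(pay_probs)   # estimated value of each action
    counts = [0] * len(pay_probs)     # how often each action was tried
    for _ in range(steps):
        if rng.random() < epsilon:                 # explore
            a = rng.randrange(len(pay_probs))
        else:                                      # exploit
            a = max(range(len(pay_probs)), key=lambda i: values[i])
        reward = 1.0 if rng.random() < pay_probs[a] else 0.0
        counts[a] += 1
        values[a] += (reward - values[a]) / counts[a]  # incremental average
    return values, counts

# One arm pays off 30% of the time, the other 70%; through trial and error
# the learner ends up pulling the better arm far more often.
values, counts = run_bandit([0.3, 0.7])
```

The same explore-then-exploit loop, scaled up enormously, underlies much of the machine learning the post describes.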

However, as humor is still somewhat of a mystery in itself, can the trial and error approach be applied to developing humor in AI? Some scientists seem to think so. The following is a headline from Wired.com: https://www.wired.com/story/comedian-machine-ai-learning-puns/?verso=true

The Comedian Is in the Machine. AI Is Now Learning Puns!
A researcher at Stanford University has created a pun generator that came up with the following groaner, all on its own.

"Why did the Greyhound stop? To get a hare cut."

Her aim is to build AI that is natural and fun to talk to, that can crack jokes, compose a poem or even tell a compelling story. "But getting there," she says, "runs up against the limits of how AI typically learns."

Of course, a common saying holds that the pun is the lowest form of humor, but a machine has to start somewhere. Will AI eventually replace the Ricky Gervaises and Stephen Colberts of the world? Who is to say?

"People have had some success in defining what would constitute humor," says Abhijit Thatte, Assistant Vice President of Technology and Practice Leader for Artificial Intelligence at Aricent, a global design and engineering firm. "But it has not been codified yet."

As even full-time stand-up comics would admit, there is no magic formula to produce the perfect joke. Much of what makes us laugh depends on subtle factors such as context or body language. "Sometimes even we humans don't know why a joke is funny," says Thatte.

When it comes to an individual's funny bone, there has to be a really deep understanding of the world in which a person lives, how things work, how their society works and, mostly, how people in their society work. Humor is indicative of something truly human and also intelligent, but in its fully human form it currently lies outside the abilities of artificial intelligence.


Monday, July 29, 2019

Is Artificial Intelligence The Next Step in Human Evolution?

What is Artificial Intelligence, or AI? Very simply, AI refers to intelligent machines that work and react like humans. Artificial intelligence can be classified into three different types of systems:

Analytical AI has only characteristics consistent with cognitive intelligence, such as thinking, reasoning or remembering, using learning based on past experience to inform future decisions.

Human-inspired AI has elements of cognitive and emotional intelligence, with an understanding of human emotions used in conjunction with decision making.

Humanized AI shows characteristics of all types of competencies (i.e., cognitive, emotional, and social intelligence) and is self-conscious and self-aware in interactions with others.

There are many areas of business, government and human entertainment that are well suited to the use of AI including but not limited to the following: 

  • Agriculture
  • Aviation
  • Education
  • Computer Science
  • Finance
  • Medical Care
  • Government
  • Heavy Industry
  • Mining

This list will continue to expand as new generations of computers emerge and as the learning curve in the AI scientific community flattens out.

The science has not been around for that long. In the 1950s, a handful of scientists from a variety of fields (mathematics, psychology, engineering, economics and political science) began to discuss the possibility of creating an artificial brain. In 1956, artificial intelligence research was founded as an academic discipline.

A number of eminent scientists are credited with founding artificial intelligence as a science, foremost among them Alan Turing, the young British mathematician who explored the mathematical possibility of artificial intelligence. Turing suggested that if humans can use available information and reason to solve problems and make decisions, why couldn't machines do the same? This was the logical framework of his 1950 paper, "Computing Machinery and Intelligence," in which he discussed how to build intelligent machines and how to test their intelligence.

However, before Turing could move further in this new science, computers had to change dramatically. At that time they were essentially very smart calculating machines: they could execute commands, but they had no means of storing them. Some years later, computer scientist and cognitive psychologist Allen Newell, political scientist, economist and sociologist Herbert A. Simon, and systems programmer John Clifford Shaw, all working at the RAND Corporation in Santa Monica, California, developed the Logic Theorist, the first program deliberately engineered to mimic the problem-solving skills of a human being.

They set out to write a program that could prove theorems in the propositional calculus like those in Principia Mathematica by Alfred North Whitehead and Bertrand Russell, a three-volume work on the foundations of mathematics published in 1910, 1912, and 1913. The first practical application of AI came in 1965, when Joseph Weizenbaum developed ELIZA, an interactive program that carried on a dialogue in English on any topic. Weizenbaum, who wanted to demonstrate the superficiality of communication between man and machine, was surprised by the number of people who attributed human-like feelings to the program.

Artificial intelligence is becoming very good at many "human" jobs, such as diagnosing disease, translating languages and providing customer service. As AI continues to evolve, there are reasonable fears that it will ultimately replace humans in many jobs and occupations, in industry and in the economy at large.

Some AI scientists feel that is not an inevitable, or even the most likely, outcome. While AI will radically alter how work gets done and who does it, the technology's larger impact will be in complementing and augmenting human capabilities, not replacing them. Many AI developers feel that with collaborative intelligence, humans and AI can actively enhance each other's complementary strengths. What comes naturally to people, such as humor, is difficult for machines, and what is straightforward for machines, such as analyzing terabytes of data very quickly, is out of reach of the human brain. In our rapidly evolving world, whether in business, education, industry, space exploration, quantum physics or medicine, both kinds of capabilities will be required.

The unanswered question is this: at some point in the future, will AI and human intelligence merge, becoming the next evolutionary step for Homo sapiens and its unique brain? To some, the answer is an unequivocal yes.

At this stage of human evolution there is no turning back; the human race will be entering either a brave new world of human/machine collaboration or a world where humans evolve into machines.

Thursday, July 18, 2019


Have you checked your mortgage rate lately?

If not, it's time to do just that. In recent weeks we have seen 5-year fixed-term rates drop to 2.69% for insured mortgages and 2.89% for conventional ones.

Are you paying too much? Even if you are facing an early-payout penalty, it may make sense to refinance. A refinance can help pay off other debts, lower overall payments, or simply save you interest costs over the lifetime of your mortgage. If a penalty is involved, some math may be required to determine whether an early payout is worth it. Sometimes this analysis involves the risk that a short-term maturity date could result in a higher rate than today's 5-year guarantee.
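The "some math" can be made concrete with a rough break-even check. This is a simplified sketch with made-up numbers, using simple interest on the current balance; a real analysis would account for compounding, amortization and fees:

```python
# Simplified refinance break-even: does the interest saved over the remaining
# term outweigh the early-payout penalty? (Illustrative only.)

def refinance_savings(balance, old_rate, new_rate, years_left, penalty):
    # Approximate the interest saved as simple interest on the current balance
    interest_saved = balance * (old_rate - new_rate) * years_left
    return interest_saved - penalty

# Hypothetical example: $400,000 balance, dropping from 3.49% to 2.69%,
# 4 years left on the term, $6,000 penalty
net = refinance_savings(400_000, 0.0349, 0.0269, 4, 6_000)
print(round(net))  # a positive number means refinancing likely pays off
```

In this made-up case the rate drop saves roughly $12,800 in interest, so even after the $6,000 penalty the refinance comes out ahead; with a larger penalty or a smaller rate gap the result flips negative.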

The best idea is to call me for an analysis. Once I help you calculate the cost benefits of an early refinance, my mortgage team is standing by to help you make a smooth transition to your new financing.

Considering a purchase? It's time to get ready with a pre-approval. Pre-approval rate holds are good for 120 days while you sift through a buyer's market to find the perfect home.

Contact Francine: francine.tracey@promerita.com

Monday, July 8, 2019


My wife and I recently drove from Vancouver to Saskatoon to visit friends. Neither of us had travelled through the prairies since the 1970s, before we met: my wife by train from Toronto to Vancouver, and I by car, crossing the border from North Dakota into western Saskatchewan on the way back to Vancouver after an extended driving tour of the USA.

We were both amazed at the wide-open spaces and the almost unlimited uncluttered landscape, which reminded us a great deal of northern Germany, where we travelled extensively in 2017. The big difference between the prairies and northern Germany is the lack of renewable energy sources on the Canadian prairies. In Germany we came across huge solar farms and wind farms, kilometer after kilometer. Farmhouses, barns and most buildings had solar panels on their roofs, and towns had clusters of wind turbines on nearby high ground.

Saskatchewan grows many crops, including canola and wheat, that contribute a great deal to the Canadian economy, but much of the countryside we drove through lay fallow or was unused. While I understand the highest wind speeds in Saskatchewan are in the southwest, it seemed the unused land in the rest of this very flat province could be used to create massive solar farms. Given that Saskatchewan is the sunniest province in Canada in all seasons, boasting almost 3,000 hours of sunshine a year, building environmentally friendly, clean-energy solar farms across the province would seem to be a no-brainer.

Upon our return, I did a bit of research on Germany and came across the following information. Germany recently increased its renewable energy goal from 55 to 65 percent by 2030 to compensate for the decommissioning of aging nuclear and coal plants. Germany has been called "the world's first major renewable energy economy." Renewable energy in Germany is mainly based on wind, solar and biomass. Germany had the world's largest photovoltaic installed capacity until 2014, and as of 2016, it is third with 40 GW. It is also the world's third country by installed wind power capacity, at 50 GW, and second for offshore wind, with over 4 GW.

In Germany, the share of renewable electricity rose from just 3.4% of gross electricity consumption in 1990 to exceed 10% by 2005, 20% by 2011 and 30% by 2015, reaching 36.2% of consumption by year end 2017. As with most countries, the transition to renewable energy in the transport and heating and cooling sectors has been considerably slower.

Now however, more than 23,000 wind turbines and 1.4 million solar PV systems are distributed all over the country. According to official figures, around 370,000 people were employed in the renewable energy sector in 2010, particularly in small and medium-sized companies. This is an increase of around 8% compared to 2009 (around 339,500 jobs), and well over twice the number of jobs in 2004 (160,500). About two-thirds of these jobs are attributed to the Renewable Energy Sources Act.

Germany's federal government is working to increase renewable energy commercialization, with a particular focus on offshore wind farms. A major challenge is the development of sufficient network capacities for transmitting the power generated in the North Sea to the large industrial consumers in southern parts of the country. Germany's energy transition, the Energiewende, designates a significant change in energy policy from 2011. The term encompasses a reorientation of policy from demand to supply and a shift from centralized to distributed generation (for example, producing heat and power in very small cogeneration units), which should replace overproduction and avoidable energy consumption with energy-saving measures and increased efficiency.

Compare these statistics to Canada's record. In the electricity sector, hydroelectricity is the largest renewable energy source in Canada, accounting for approximately 60% of Canada's electricity generation. Other, non-hydro renewable sources, such as biomass, wind, tidal and solar, contribute 3%, compared with Germany's 36.2% at the end of 2017.

The big issue with hydroelectricity is its impact on the environment, due in part to the enormous amounts of concrete required. A major component of concrete is cement, and the cement industry is one of the primary producers of carbon dioxide, a potent greenhouse gas. Concrete also damages the most fertile layer of the earth, the topsoil.

Solar energy systems have certain negative impacts on the environment, just like any other energy system, but solar energy is far cleaner than conventional energy sources. Solar energy systems have many advantages: they are cheaper, produce no pollutants during operation and, compared with fossil fuels, are an almost infinite energy source.

On a closing note, a common myth is that solar panels do not work during winter; on the contrary, cold temperatures typically improve solar panel output, and white snow can reflect light and improve PV performance. Winter hurts solar production only when the panels are covered with snow, a problem easily solved.

Given its climate and geography, Saskatchewan, if not Canada as a whole, could be a leader in this field.

Wednesday, June 12, 2019


Executives, entrepreneurs and luminaries committed to advancing a transformation of the working world are redefining what it means to be a successful company in the 21st century.

By convening exchanges of leading practices, deepening research and recognizing those on the leading edge of transformation, these futurists aim to catalyze a global shift toward humanity in business, inspiring and enabling organizations to cultivate purpose-rich cultures that better serve their employees, customers and the world.

Their original research and year-round events, including appearances at global forums such as the Clinton Global Initiative Annual Meeting and the World Economic Forum, bring together a diverse mix of business leaders, academics, scientists, entrepreneurs and storytellers to advance the science and execution of purpose in business.

More and more, customers are making their buying decisions based on an organization's stated aims, and more millennials are choosing their employer based on its purpose. Now that companies are armed with the impetus and the business case to transform around purpose, the discussion needs to shift from 'why' to 'how.' And this is where forward-thinking planning and strategy come into play. The old saying, often attributed to Mark Twain, "To stand still is to fall behind," is more relevant today than ever before.