What is Artificial Intelligence?

Carter Meyers

Resurrected over the last several years, the term Artificial Intelligence (AI) and its reported manifestations have begun to appear more frequently in various forms and facets of our lives. Be it the chatbot through which we order our pants, the synthesized voice we hit 0 to ignore, or the texts we get when our credit card is used to buy flip-flops overseas, AI has emerged everywhere, and has surfed into our lives on an uncomfortably familiar wave that enamors, eludes, and conjures a “what is it, and what do I do with it?” feeling (yeah, “cloud,” I’m looking at you).

The History of AI

Used to describe a machine’s ability to “learn,” “solve problems,” and mimic other cognitive functions, AI has naturally evolved since its formalization in the latter half of the 1950s. Since then, it has expanded, spawned child disciplines, played both leading and supporting roles in various technologies and, like many ideas before it, seen its fair share of popularity and neglect. At times during the ’70s, ’80s, and ’90s, waning interest in AI, along with consequent budget cuts, led to “AI winters,” periods during which limited funding and attention stifled the field’s growth. Prior to the mid-2000s, intermittent advancements and dedicated evangelists kept the AI pulse alive, but were unable to woo the masses, or enough deep pockets, for AI to fulfill its purported destiny.

These days, however, things seem different. Having grown rapidly over the last decade and garnered more than $30 billion in investments, AI has become the new hip thing. With no “winter” in sight, we’re compelled to sort out just what AI is, if and/or why we should care, and why it’s made a comeback.

What is AI?

To understand AI, we must first place as our cornerstone the fact that it is not witchcraft, though I plan on claiming otherwise come budget time. At its core, AI is good old software, and, at the risk of detracting from its genuine significance, can only be as capable, well-meaning, and reliable as the humans who build it. Furthermore, AI software is functionally identical to all preceding software in that it will forever be the product of some human who’s written some code that sends some command to a computer by way of 1’s and 0’s — hold your deflation.

How AI is different comes down to how the software is written. Traditional software has always centered on, and has typically been limited to, a comfortably explicit and granular approach to problems and computational instruction: if X equals Y, then do this, else do that. While AI software, as we’ve established, still contains these types of programmatic decision trees, it doesn’t stop there. AI builds atop these principles and takes a far more abstracted and conceptual approach to instruction and software goals. To varying degrees, AI also looks at historical input and output to spot trends and “learn” from its mistakes, thereby compounding its efficacy.
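
To make that contrast concrete, here is a minimal Python sketch of my own, not taken from any particular product. The first function is the traditional, explicit rule a programmer hard-codes; the second derives its rule from historical data, so its behavior shifts as the data does. The transaction-flagging scenario and the 95th-percentile rule are assumptions chosen purely for brevity.

```python
# Traditional, explicit approach: the rule is fixed by the programmer.
def flag_transaction_explicit(amount):
    """Flag a charge if it exceeds a hard-coded threshold."""
    if amount > 500:
        return "review"
    return "approve"


# "Learning" approach: the threshold is derived from historical charges,
# so the rule adapts as new observations accumulate.
def learn_threshold(past_amounts):
    """Place the threshold near the 95th percentile of past charge amounts."""
    ordered = sorted(past_amounts)
    index = int(0.95 * (len(ordered) - 1))
    return ordered[index]


history = [12.50, 48.00, 75.30, 20.00, 310.00, 42.00, 18.90, 95.00, 60.00, 880.00]
threshold = learn_threshold(history)


def flag_transaction_learned(amount):
    return "review" if amount > threshold else "approve"


print(flag_transaction_explicit(650))  # always "review"
print(flag_transaction_learned(650))   # depends on what the history looks like
```

In spirit, the overseas flip-flop alert from the introduction is the second pattern taken to a much larger scale.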

The inspiration for this kind of approach, and thus the origin of AI, came from AI pioneers who foresaw great potential in computers, but understood that writing these unique if/then/else statements for every morsel of every process, use case, and business model conceivable would be a real non-starter. Software development needed to be more efficient.

To better understand this concept, imagine software, heavily simplified for our purposes here, that’s been designed to determine whether a submitted picture is of a red bicycle. To build this, you’ll need to write programmatic instructions on how to recognize the specific shades of red you want to include, along with the general shape of a bicycle, starting with its unique components (e.g., the seat, frame, pedals, handlebars, and, of course, the tassels). Don’t forget, you’ll also need to account for optional and disproportionately sized components.
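
To see why this path bogs down quickly, here is a hypothetical Python sketch of the explicit route. The toy image format (a grid of RGB tuples), the color thresholds, and the stubbed detect_wheel_shapes helper are all invented for illustration; a real attempt would need hand-written rules like these for every component and every lighting condition.

```python
# A toy "image" is just a grid of (R, G, B) tuples; no vision library involved.

def detect_wheel_shapes(image):
    """Stub: a real version would need many more explicit geometric rules."""
    return 2  # pretend we always find two wheel-like circles


def is_red(pixel):
    """One hand-written rule for 'red': strong red channel, weak green and blue."""
    r, g, b = pixel
    return r > 150 and g < 80 and b < 80


def looks_like_red_bicycle(image):
    """Crude explicit checks: enough red pixels plus two wheel-ish regions.

    The seat, frame, pedals, handlebars, and tassels would each need their
    own rules like these, which is exactly why the approach doesn't scale.
    """
    pixels = [p for row in image for p in row]
    red_ratio = sum(is_red(p) for p in pixels) / len(pixels)
    return red_ratio > 0.10 and detect_wheel_shapes(image) >= 2


tiny_image = [[(200, 30, 30)] * 10 for _ in range(10)]  # a 10x10 all-red patch
print(looks_like_red_bicycle(tiny_image))  # True, but only because the rules are so crude
```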

A more abstracted and efficient way to approach this would be to forgo the recognition of a specific object and instead write instructions to infer layers, group pixels as object silhouettes, and understand how shading can impact colors. You would then need to devise a method to store the results of that analysis in a textual and easily indexable manner. A human could then pass dozens, hundreds, or thousands of bicycle images through this software, the result of which would be a collection of text-based signatures that the software would then use to evaluate all future images passed its way. The human could then do the same for chairs, tables, boats, cars, and any other discernible “thing.” Make no mistake, this kind of development often requires more upfront effort, but the payoff is usually well worth it.
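
Below is a rough sketch of that signature idea, using the same toy image format as the previous snippet. The “signature” here is just a coarse color histogram and the matching is a nearest-neighbor lookup; both are stand-ins chosen for brevity, not how any production vision system actually works.

```python
def signature(image):
    """Collapse an image into a coarse color histogram (its text-friendly 'signature')."""
    counts = {"red": 0, "green": 0, "blue": 0, "other": 0}
    for row in image:
        for r, g, b in row:
            if r > max(g, b) + 40:
                counts["red"] += 1
            elif g > max(r, b) + 40:
                counts["green"] += 1
            elif b > max(r, g) + 40:
                counts["blue"] += 1
            else:
                counts["other"] += 1
    total = sum(counts.values())
    return {color: count / total for color, count in counts.items()}


def distance(sig_a, sig_b):
    """How different two signatures are (smaller means more alike)."""
    return sum(abs(sig_a[color] - sig_b[color]) for color in sig_a)


# "Training": pass labeled example images through and keep only their signatures.
library = [
    ("red bicycle", signature([[(210, 25, 25)] * 8 for _ in range(8)])),
    ("blue chair", signature([[(20, 30, 200)] * 8 for _ in range(8)])),
]


def classify(image):
    """Label a new image by the closest stored signature."""
    sig = signature(image)
    return min(library, key=lambda item: distance(item[1], sig))[0]


print(classify([[(190, 40, 35)] * 8 for _ in range(8)]))  # "red bicycle"
```

Scale the library up to thousands of “things” and thousands of examples per thing and you have the shape of the approach described above; the upfront effort goes into the signature and comparison logic rather than into rules for each individual object.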

AI pioneers, along with their present-day counterparts, came to understand that greater conceptualization of tasks could make a single software version applicable to a much wider range of departments, industries, data types, and use cases. The potential of this altered approach, as repeatedly explained and extrapolated upon, increased the ostensible distance between human hands (i.e., what humans tell computers to do) and what the software ultimately accomplishes, creating a genuinely dramatic disparity between what computers were known to do and what AI promised to accomplish. That disparity, which still exists today as a natural consequence of how ideas develop, needed to be explained to the public and investors, and was ultimately given the satisfyingly sci-fi description we still use today: “artificial intelligence.”

It is for this reason that AI designations are incredibly subjective. If you know how the trick is done, it’s hard to convince yourself it’s magic. Deepening the ambiguity is the “AI effect,” the idea that AI is only what we’ve theorized is possible but have not yet accomplished, and that once something has been accomplished and sufficiently explained, it is no longer AI. If you subscribe to this notion, then AI becomes a sliding scale of computational achievement and understanding, which would explain why many AI technologies you’ve used in the past no longer wield that label.

The AI Resurgence

Now that we have a better understanding of what AI is, let’s talk about its resurgence. The reason why AI now saturates our news feeds is, unsurprisingly, data. Lower storage costs, greater computing power, and a society intent on accepting every terms-of-service document ever created have led to an unprecedented explosion in collected data (e.g., correlatable dates, genders, pictures, Tweets, hobbies, favorite colors).

Responsible to shareholders and enabled by growing economies, consultants, C-level execs, and creative developers have been collecting terabytes of this intoxicating data every day and have set out to use every bit of it to enhance offerings, better understand customers, and predict trends worthy of a pivot. A great and topical example of this is the breadth of filters Facebook makes available to its advertisers.

The exponential growth of these data sets, along with the need to evaluate them in more sophisticated ways and in near real time, has revived AI, which has met those needs well. This comeback has naturally led to new investments, lower barriers to entry, more specialized research, and the subsequent advancement of AI subfields like machine learning, natural language processing, and artificial general intelligence. While we don’t have time to discuss every subfield here, we will touch on one, as it’s the most popular of the lot, the most synonymized, albeit mistakenly, with AI, and the technology best poised to yield immediate benefits. I refer, of course, to machine learning (ML).

What is Machine Learning?

Used to describe the process by which computers use previously seen data to adjust how they interpret new data (i.e., “learn”), ML shows great promise and has become the poster child for what AI can do. It works by assigning context to, and finding relationships between, data points in larger data sets. This allows us humans to pass in vast amounts of data, tell the ML software what we want to find, and then let it build the algorithm that gets us there.
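
As a minimal illustration of that “pass data in, say what you want, let it build the algorithm” pattern, here is a sketch using scikit-learn, a popular open-source ML library. The scenario (predicting monthly revenue from ad spend and headcount) and every number in it are invented for the example.

```python
from sklearn.linear_model import LinearRegression

# Historical data: [ad spend in $k, number of sales reps] -> monthly revenue in $k.
X = [[10, 2], [20, 3], [15, 2], [30, 5], [25, 4], [40, 6]]
y = [120, 200, 160, 330, 270, 430]

model = LinearRegression()
model.fit(X, y)                   # we say what to find; it fits the relationship

print(model.predict([[35, 5]]))   # estimate for a scenario we haven't seen
```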

In the context of more abstract programming, it means no longer supplying a single spreadsheet as input and telling the computer that column A is a date, column B is a dollar amount, and column C is a decimal. Instead, it means telling computers how to uniquely identify dates, dollar amounts, and decimals, how to value them based on variances in the data, and how correlations between those data types should be weighted.
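
A toy sketch of the first half of that idea, teaching the computer to recognize the types itself rather than labeling each column by hand, might look like the following. The regular expressions and date formats are simplistic assumptions; real systems use far richer heuristics (and also learn how to weight the columns, which this sketch doesn’t attempt).

```python
import re
from datetime import datetime


def infer_type(value):
    """Guess whether a raw string is a date, a dollar amount, or a decimal."""
    if re.fullmatch(r"\$\d[\d,]*(\.\d{2})?", value):
        return "dollar amount"
    if re.fullmatch(r"\d+\.\d+", value):
        return "decimal"
    for fmt in ("%Y-%m-%d", "%m/%d/%Y"):
        try:
            datetime.strptime(value, fmt)
            return "date"
        except ValueError:
            pass
    return "unknown"


column = ["2017-03-01", "$1,250.00", "0.42", "04/15/2018"]
print([infer_type(v) for v in column])
# ['date', 'dollar amount', 'decimal', 'date']
```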

ML has the potential to help a wide array of entities, including yours, analyze large amounts of data to spot actionable patterns in variables otherwise neglected. For example, imagine you run a personal injury law firm and want to use ML to analyze your settlement history over the last several years. Simply tell the ML software what you want, which in this case is the highest settlement possible, and it can analyze that data to find commonalities and trends the naked eye might have missed. It might find out that insurance company X will never settle for more than 18% above its initial offer, that Attorney Y always collects 12% more per settlement than everyone else, or that traffic accidents involving a plaintiff under the age of 16 will, on average, result in a 22% higher settlement. With that information, you could make better staffing, marketing, and strategic legal decisions that could greatly affect your firm’s success.
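
Hedged heavily, here is what a first pass at that kind of settlement analysis might look like with pandas and scikit-learn. The column names, the six fictional cases, and the encoding choices are all invented for illustration; a real engagement would involve far more data and far more careful feature work.

```python
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

# Invented settlement history, one row per closed case.
cases = pd.DataFrame({
    "insurer_X":     [1, 1, 0, 0, 1, 0],       # 1 = insurance company X
    "attorney_Y":    [1, 0, 1, 0, 1, 0],       # 1 = handled by Attorney Y
    "plaintiff_u16": [1, 0, 1, 0, 0, 1],       # 1 = plaintiff under 16
    "initial_offer": [40_000, 55_000, 30_000, 80_000, 50_000, 60_000],
    "settlement":    [52_000, 60_000, 41_000, 88_000, 58_000, 78_000],
})

features = ["insurer_X", "attorney_Y", "plaintiff_u16", "initial_offer"]
model = DecisionTreeRegressor(max_depth=3, random_state=0)
model.fit(cases[features], cases["settlement"])

# Which variables actually move the settlement number?
for name, weight in zip(features, model.feature_importances_):
    print(f"{name}: {weight:.2f}")
```

The interesting output here isn’t the predictions so much as the importances: they point at which variables (the insurer, the attorney, the plaintiff’s age, the opening offer) actually move the settlement number, which is exactly the kind of commonality described above.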

Like AI, however, machine learning is still just software, but with mathematics as its main ingredient. It is this math that still needs to be devised, validated, and supervised by humans, though, if done well, less so each day.

With this in mind, how can ML help you? What kinds of problems can it solve? What sort of answers can it provide? The key to understanding its applications is to take AI’s lead and think more abstractly about your goals. Which numbers do you want to increase? Which do you want to decrease? Which successes do you want to replicate? Which failures do you want to determine the causes of? What problems are keeping you up at night? With the right approach, application, and input (i.e., data), these questions can be answered simply, even with open-source and off-the-shelf software.

So give some thought to these questions. While the answers you seek may require the procurement of new data, they may also live in the countless spreadsheets and databases you’ve been naturally accumulating for decades. When you’ve decided where you want to start, or if you need help sorting that out, reach out to us and we’ll be happy to light the trail.