Generative Design 101


Generative design, also known as AI-aided design, is the next frontier for designers who work in consumer electronics, manufacturing, and software. Generative design harnesses the power of artificial intelligence (AI) to develop new high-performance design concepts and iterations that help solve complex challenges. This AI-driven design exploration process can be thought of as a design assistant: the designer inputs design goals into the generative design software, along with parameters such as performance requirements, and the software produces and simulates new product options.

Generative design tools that produce optimum forms for products and buildings without human intervention are set to transform both the physical world and the role of the designer. Generative Design explores how programming languages such as Processing can be used to create structures from sets of rules, or algorithms, to form the basis of anything from software interfaces, natural language conversations, patterned textiles and typography to lighting, sculptures, films and buildings.


  1. Dezeen: Design Superpowers
  2. Form Labs: Generative Design 101
  3. Generative Design Primer
  4. TrendHunter: Generative Design Examples
  5. Monolith: An intuitive AI software platform that empowers people to understand, predict, and optimize products dramatically faster

Use the navigation to take a deep dive into all the art and design forms using generative (AI-aided) programming to reinvent the very nature of creation.

Generative Art


Generative art is a process of algorithmically generating new ideas, forms, shapes, colors, or patterns. First, you create rules that provide boundaries for the creation process. Then a computer (or, less commonly, a human) follows those rules to produce new works. Generative art is produced by mathematical algorithms written by the artist; the artist's role is to create an autonomous system and define the algorithms by which the art is created.
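This rules-then-execution loop can be sketched in a few lines of Python. The example below is a minimal illustration, not any particular artist's system: an elementary cellular automaton, a classic rule-based generator in which a single row of cells evolves under one simple rule into a surprisingly intricate pattern.

```python
# Elementary cellular automaton (Rule 30): the artist defines the rule,
# and the program autonomously generates the pattern.

RULE = 30  # Wolfram rule number encoding the 8-entry update table

def step(row, rule=RULE):
    """Apply the rule to every cell, based on its left/right neighbors."""
    n = len(row)
    new = []
    for i in range(n):
        left, center, right = row[(i - 1) % n], row[i], row[(i + 1) % n]
        index = (left << 2) | (center << 1) | right  # neighborhood as 0..7
        new.append((rule >> index) & 1)
    return new

def render(width=41, generations=12):
    """Grow the pattern from a single seed cell and draw it as text."""
    row = [0] * width
    row[width // 2] = 1  # one live cell in the middle
    lines = []
    for _ in range(generations):
        lines.append("".join("#" if c else " " for c in row))
        row = step(row)
    return "\n".join(lines)

print(render())
```

Swapping `RULE` for another number (there are 256 elementary rules) produces an entirely different aesthetic from the same autonomous system, which is exactly the point.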

In contrast to traditional artists who may spend days or even months exploring one idea, generative code artists use computers to generate thousands of ideas in milliseconds. Generative artists leverage modern processing power to invent new aesthetics – instructing programs to run within a set of artistic constraints, and guiding the process to a desired result. This method vastly reduces the exploratory phase in art and design, and often leads to surprising and sophisticated new ideas. 

This Generative 101 example is Kaleidoscope. The person controls the visual, but the algorithm was created beforehand and adheres to the mathematical theory of chaos.

The difference with traditional, or "manual," art is that the artist works on the piece directly and is its sole creator. Each piece is controlled by the artist, and only the artist is responsible for the work.

In generative art, the beauty lies in building a system: a third force that dominates both the artist and the work itself. The artist "paints" the formula by which a single detail extends into numerous details and transforms into a completed piece of work. The unique characteristics of a piece are written into its algorithm.

Generative art by Frederik Vanhoutte, a creative coder, generative geometrist, and medical radiation physicist. He tags his generative art #Processing #CreativeCoding #GenerativeGeometry #NFT #CryptoArt. | Follow Frederik on Twitter

"Generative art is the ceding of control by the artist to an autonomous system. With the inclusion of such systems as symmetry, pattern, and tiling one can view generative art as being old as art itself. This view of generative art also includes 20th century chance procedures as used by Cage, Burroughs, Ellsworth, Duchamp, and others." - Cecilia Di Chio from the book Applications of Evolutionary Computation.

Anders Hoff
Generative art by Anders Hoff. This is part of his project “Inconvergent”, which explores the complex behavior that emerges from systems with simple rules.

Anders Hoff (a.k.a. inconvergent on Twitter) is a generative artist who is fascinated by patterns. He often finds it useful to start with a highly organized structure and to then look for ways to gradually disrupt it. Hoff says interesting results can often be found between the initial organized structure and the chaotic end result. He searches for enough order to be recognizable and enough chaos to break out of ordinary forms.

Gyre 35700
Gyre 35700, a generative art work by Mark Stock. This piece is Stock’s reflection on the hierarchy of currents and eddies in the ocean, and their little-understood effect on global climate change. It is a 42"x28" digital archival inkjet print on canvas (2012).

The Nature of Generative Art

I’ve always wondered what inspires intention and invention. For many generative artists like me, the inspiration lies in getting back to basics: visualizing Spaceship Earth, its natural forms, its living organisms, the trillions of interactions of molecules. Spaceship Earth’s systems are in constant movement and interaction with one another, strictly following the rules of nature.

Natural phenomena — rain, snow, fire — may appear random and chaotic, but they all work together in constant change and interaction. Generative art is likewise meant to be in constant change, an everlasting novelty. And one can interact with the visual using any source: light, sound, the spatial position of physical objects. It is all information for our art.

A tremendous number of living organisms use sound to communicate or exchange information. We can connect sound to our visuals as well and see immediate changes. For example, depending on whether a sound's frequency is low or high, the visual lines might move up or down. That is the most basic example; you can only imagine how complicated a work could become if it took every detail into account.
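As a minimal sketch of that frequency-to-motion idea (the function name, ranges, and linear mapping here are illustrative assumptions, not any particular artist's system), mapping an audio frequency to a vertical line offset could look like this:

```python
# Map an audio frequency (Hz) to a vertical line offset in pixels.
# Low frequencies push the line down, high frequencies push it up.
# LOW_HZ/HIGH_HZ/MAX_OFFSET are made-up values for illustration.

LOW_HZ, HIGH_HZ = 20.0, 2000.0   # assumed input frequency range
MAX_OFFSET = 100.0               # pixels above/below the baseline

def frequency_to_offset(freq_hz):
    """Linearly map [LOW_HZ, HIGH_HZ] to [-MAX_OFFSET, +MAX_OFFSET]."""
    # Clamp to the expected range so outliers don't fly off screen.
    freq_hz = max(LOW_HZ, min(HIGH_HZ, freq_hz))
    t = (freq_hz - LOW_HZ) / (HIGH_HZ - LOW_HZ)  # normalize to 0..1
    return (t * 2.0 - 1.0) * MAX_OFFSET

# A rising tone sweeps the line from the bottom of its range to the top.
print([round(frequency_to_offset(f)) for f in (20, 1010, 2000)])
```

A real audio-reactive piece would feed this mapping from a live FFT of the microphone input, but the core idea is exactly this small translation step from sound data to visual parameters.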

It is evident that the founding mothers of generative art replicated the magic of nature, evident in the forces of our planet. The current generative art movement might be the closest thing to nature. I believe each of us is bringing back this unbroken interaction with the natural world.

The Practical Application of Generative Art

Today generative art has met commercial success in four areas:

  1. Video games and gaming engines. Generative art is used to generate and optimize images in real time in scenario-based games, using rules-based algorithmic art (code) that predicts the behavior of images such as landscapes.
  2. Virtual reality and 3D environments. In film, Disney's The Mandalorian used a generative-art gaming engine that produced stunning locations for its virtual production.
  3. Augmented reality and visual projections in light design, applied in theaters, museums, and concerts.
  4. Architecture-based mapping. Computation-based approaches to design have emerged over recent decades and rapidly become popular among architects, e.g. Zaha Hadid Architects.

These new expressions continue to grow at a rate and pace that is disrupting business models in commercial art.

Zaha Hadid Architects
Zaha Hadid Architects' Light Projection Show Transforms 18th Century Baroque Palace in Germany

Generative Art & Algorithmic Art

Many have asked what the difference is between generative art and algorithmic art. Algorithmic art is a subset of generative art and is related by systems theory. The final output of algorithmic art is typically displayed on a computer monitor, printed with a raster-type printer, or drawn using a plotter. Algorithmic art is also sometimes called code art or procedural art, because it is created by a computer following a set of procedures laid out in code.

Algorithmic art dates back to the early 1940s, when researchers at ENIAC, and later at Bell Labs and GRAV, pioneered the use of computers for creativity. Researchers like A. Michael Noll and Vera Molnár envisioned a new breed of artist-computer scientist. Today that vision has been realized.

Woman working on ENIAC — The first electronic general-purpose computer (1940s).
Computational Creativity and Code
IBM 7094 with IBM 7151 Console (1962) / Creative use of Computer Graphics by A. Michael Noll at Bell Labs (1962).

Algorithmic artist Vera Molnár

Artists Shaping the Generative Art Scene

From the pioneers to the present day practitioners here are a number of generative artists you should follow.

Vera Molnár

In the 1960s, Molnár co-founded several artist research groups: GRAV, which investigated collaborative approaches to mechanical and kinetic art, and Art et Informatique, with a focus on art and computing.

Katharina Brunner

Katharina Brunner is a generative artist and data journalist whose GitHub repository on generative art is a great resource for anyone looking to get started with the programming language R. Her R package generativeart lets you create images based on many thousands of points.

Jon McCormack

Since the late 1980s McCormack has worked with computer code as a medium for creative expression. Inspired by the complexity and wonder of a diminishing natural world, his work is concerned with electronic “after natures” – alternate forms of artificial life that may one day replace the biological nature lost through human progress and development.

Margaret A. Boden

Margaret A. Boden is Research Professor of Cognitive Science at the University of Sussex. She is the author of Artificial Intelligence and Natural Man, expanded second edition (MIT Press), AI: Its Nature and Future, The Creative Mind, and other books. She was the 2018 recipient of the ACM-AAAI Allen Newell Award for contributions to the philosophy of cognitive science.

Mike Brondbjerg

Designer / developer / artist working in data viz, information & generative design. Currently at London City Hall Intelligence Unit working on Data Viz.

Roelof Pieters & Samim Winiger

Roelof Pieters and Samim Winiger provide an exceptional timeline of computational creativity in their treatise On the Democratization & Escalation of Creativity.

Frederik Vanhoutte

By day, Vanhoutte is a physics Ph.D. working as a medical radiation expert in a university hospital in Belgium. Together with a team of radiation oncologists, physicists, and nurses, he turns medical data into effective treatments for cancer patients.

Dr. Rebecca Anne Fiebrink

Rebecca Anne Fiebrink, HCI/ML researcher. Dr. Fiebrink is a Reader at the Creative Computing Institute, University of the Arts London, and the Department of Computing, Goldsmiths, University of London.

Generative Art Action Learning

Getting started with generative art has many avenues: there are many tools, programs, frameworks, and languages that make it easy to begin creating your own algorithmic art. Here are some to help you get started.

Processing: Our staff pick. This is a powerful programming language and development environment for code-based art.

openFrameworks: A popular open source C++ toolkit for generative and algorithmic art.

Cinder: An open source C++ library for creative coding.

C4: An open source iOS framework for generative art.

Unity: A powerful game engine that can help with generative art and large-scale installations.

PlayCanvas: A collaborative WebGL engine that works in real-time.

hg_sdf: A GLSL library for signed distance functions.

HYPE:  A collection of classes that does a lot of heavy lifting with minimal code required.

nannou: An open source framework for creative coding in Rust.

An open source collection of Clojure and ClojureScript design tools.

PixelKit: An open source Swift framework for live graphics.

OPENRNDR: An open source Kotlin library for generative art.

Phaser: An HTML5 framework for games that uses Canvas and WebGL.

Canvas-sketch: An HTML5 framework for generative artwork in JavaScript and your browser.

TouchDesigner: A node-based visual development platform for real-time interactive media, with a workflow built to handle point cloud data in real time on recent GPUs.

vvvv: vvvv is a hybrid visual/textual live-programming environment for easy prototyping and development. It is designed to facilitate the handling of large media environments with physical interfaces, real-time motion graphics, audio and video that can interact with many users simultaneously.

Pure Data: Pure Data (or just Pd) is an open source visual programming language for multimedia. Its main distribution (aka Pd Vanilla) is developed by Miller Puckette

Notch: A node-based interface that’s familiar and intuitive to explore, allowing limitless possibilities simply by connecting logical building blocks. Timeline and animation editing, compositing and grading, all in one environment, designed with narrative in mind.

The following tools are all based on the theory of ornamental (wallpaper) groups: a classification that sorts patterns into categories according to their symmetries and describes their special properties.
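The symmetry idea behind ornamental groups can be sketched directly. The toy Python example below (an illustration of the concept only, not how any of these tools is implemented) builds a tile with mirror symmetry by applying reflections to one random quadrant:

```python
import random

# Build a symmetric tile by reflecting one random quadrant horizontally
# and vertically; the reflections play the role of the ornamental
# group's symmetry operations.

def make_quadrant(size, seed):
    rng = random.Random(seed)  # seeded, so the "random" motif is repeatable
    return [[rng.choice(" #") for _ in range(size)] for _ in range(size)]

def make_tile(size=4, seed=1):
    quadrant = make_quadrant(size, seed)
    top = [row + row[::-1] for row in quadrant]  # horizontal reflection
    full = top + top[::-1]                       # vertical reflection
    return ["".join(row) for row in full]

print("\n".join(make_tile()))
```

Repeating the resulting tile across a grid yields a seamless pattern, because each edge is a mirror image of the opposite one; richer wallpaper groups add rotations and glide reflections to the same recipe.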

Adobe Illustrator and Photoshop: Within Adobe Illustrator and Photoshop, you can choose from pre-designed elements (or create them yourself) to generate a pattern instantly. Simply select your object, then hit Object > Pattern > Make. The final product can be saved in any format.

Geo Pattern: Enter any combination of letters, and this fun tool will generate a random geometric pattern made up of polygons, interlocking circles, harmonic waves, and so on. Save options are available only in PNG format.

KORPUS: A similar free of charge program that transforms any word into a unique pattern. Based on Conway’s Law, it allows you to generate an unlimited number of ornaments. The outcome can be saved in PNG, JPG, or SVG.

Plain Pattern & Patternico: These are free analogs to Adobe Illustrator and Photoshop. Plain Pattern and Patternico can save you time during the setup mode. You can even upload your own SVG files and use them to create a pattern. Results available in PNG format.

EveryPixel: Everypixel is an algorithm that independently builds a layout from pre-installed elements: lines, objects, images. In a single cycle, it can automatically create a ton of different patterns. Using the same ornament, you can generate hundreds of options consisting of the same elements in different sizes, colors, and orientations relative to each other. Right now, you can download pre-made patterns; soon, the developers will release the software to the public and will also teach neural networks this operation.

100 Years of Generative Art

Our ability to represent complex creative problems is increasing. A fundamental shift in perspective is allowing us to revisit many creative problems. The following section presents generative creation and explores how it democratizes and escalates creativity.

Generative art has come a long way since the 1940s; as we rush toward 2040, we can see it in full bloom. Generative art and automated algorithms still need human artists to help machine learning algorithms grasp creative tasks. This chapter of generative art invites you to take it to the next place and time.

Generative Images, Photos & Videos



Designing with AI frees designers to outsource mundane tasks and focus on solving more complex problems. One of the most time-consuming and wasteful tasks is creating templated graphic and photo assets in infinite variations. It takes far too much time and is demotivating, when designers could be spending that time on more valuable product work.

Using generative image, photo, video, and art applications, designers can shift their focus from the mundane back to the magical.


AI Generated Media

Virtual talent and digital human animation on demand with AI-generated media.

Artisto App

Free video editor app with art filters and photo effects for any selfies, pictures, movies, animation and documentary.

TL-GAN model

Generating custom photo-realistic faces using AI. Controlled image synthesis and editing using a novel TL-GAN model


Vincent is a tool for illustrators that transforms rough sketches into paintings in the style of Van Gogh, Cézanne, or Picasso.


Neural network-based app that stylizes photos to look like works of famous artists. This one makes a classic portrait while Google Stadia does the same for games.

Reverse Prisma

Researchers from UC Berkeley convert impressionist paintings into a more realistic photo style.

Photorealistic Facial Expression Synthesis

Photorealistic facial expression synthesis from a single face image can be widely applied to face recognition, data augmentation for emotion recognition, or entertainment.

Photo Wake-Up

3D Character Animation from a Single Photo, by Chung-Yi Weng, Brian Curless, and Ira Kemelmacher-Shlizerman.

This Person Doesn't Exist Sketch Plugin

A Sketch plugin that puts AI-generated faces into design mockups. It's a great application of a popular idea.

AI Is The New UI



Imagine having a conversation with a friend and asking them a question, only to have them stare at you silently for three seconds before answering. Would the conversation feel natural? Or would you feel awkward, like you’d done something wrong? Most importantly, would you do it again? 

Today, millions of people happily chat with Amazon Echo's virtual assistant, Alexa. One of my favorite Amazonian stories is about the power of timely response. When the Echo was under development less than five years ago, voice recognition technology suffered an average delay in response time of almost three seconds. The team set a goal of two seconds for Echo, and was eventually able to bring it down to below 1.5 seconds before launch, a critical factor in the success of a device that has no screen or other interface to fall back on. Either people can talk to Alexa as they would a person, or the device is a failure.

As the Head of Research & Design for Alexa Devices at Amazon, I could not be more proud of Alexa’s success. Alexa shines as just one example of AI playing an ever more capable role across user interfaces (UI). 

As AI matures, many of the problems that hindered adoption in the past are disappearing. It’s now consistently being used to add frictionless intelligence to people’s interactions with technology, creating opportunities to make any interface both simple and smart – driving wider, faster adoption of technology, and providing better outcomes for people. 

In a 2017 survey of more than 5,400 IT and business executives, 79% agreed that AI will help accelerate technology adoption throughout their organizations. In short, AI is poised to enable companies to improve the experience and outcome for every critical customer interaction. AI already plays a variety of roles throughout the user experience (UX).

At the simplest level, it curates content for people, like the mobile app Spotify suggesting new music based on previous listening choices. In a more significant role, AI applies machine learning to guide actions toward the best outcome. Farmers are improving yields by implementing AI-enabled crop management systems: Blue River Technology’s tools combine computer vision and machine learning with their robotic systems to apply plant-by-plant fertilizer wherever needed. 

Using advanced algorithms means ‘LettuceBot’ not only takes care of pesky weeds among the lettuce crop, but also addresses growing conditions that are less than optimal – like identifying sprouts that are too close to each other, and removing the one least likely to thrive.

And at the height of sophistication, AI orchestrates. It collaborates across experiences and channels, often behind the scenes, to accomplish tasks. AI not only curates and acts based on its experiences, but also learns from interactions to help suggest and complete new tasks.

Designers are rapidly transitioning from traditional interface design into specialists in visual, voice, sound, gesture, and thought interface design.


Despite skepticism of AI as just another technology buzzword, its momentum is very real. 87% of executives we surveyed report they will invest extensively in AI-related technologies over the next three years. 

Generative Design + Machine Learning

CognitiveExperience.Design | DOSSIER 2030


Design is undergoing a transformation, and a new generation of design leaders is using Cognitive Experience Design to craft customer experiences that reduce time-to-task and delight the user's senses with new art forms that use algorithms to generate infinite possibilities.

Machine Learning

Machine learning is an application of artificial intelligence (AI) that provides systems the ability to automatically learn and improve from experience without being explicitly programmed.

Generative Design

Generative design is an iterative design process involving a program that generates a certain number of outputs that meet certain constraints, and a designer who fine-tunes the feasible region by selecting specific outputs or changing input values, ranges, and distributions. Throughout the generative design process, the designer learns to refine the program with each iteration as their design goals become better defined.

The output could be images, sounds, architectural models, animation, and much more. It is therefore a fast method of exploring design possibilities that is used in various design fields such as art, architecture, communication design, and product design.
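That generate-constrain-select loop can be sketched in a few lines of Python. This is a toy illustration with made-up parameters and formulas, not any vendor's solver: the program proposes candidate designs, a constraint filters them, and a score standing in for the designer's preference ranks the survivors.

```python
import random

# Toy generative design loop: propose beam cross-sections, filter by a
# strength constraint, then rank the survivors by weight (lighter wins).
# All parameters and formulas are illustrative stand-ins.

def strength(w, h):
    """Stand-in strength metric: section modulus of a rectangle, w*h^2/6."""
    return w * h * h / 6

def weight(w, h):
    """Stand-in weight metric: cross-sectional area."""
    return w * h

def design_iteration(n=200, min_strength=20000, seed=42):
    rng = random.Random(seed)  # seeded so each run is repeatable
    # 1. Generate: random (width, height) candidates in millimeters.
    candidates = [(rng.uniform(10, 100), rng.uniform(10, 100)) for _ in range(n)]
    # 2. Constrain: keep only designs strong enough for the load.
    feasible = [c for c in candidates if strength(*c) >= min_strength]
    # 3. Select: the designer's preference encoded as a score to minimize.
    return sorted(feasible, key=lambda c: weight(*c))

ranked = design_iteration()
w, h = ranked[0]
print(f"{len(ranked)} feasible designs; lightest: {w:.1f} x {h:.1f} mm")
```

In a real tool, the designer would inspect the top candidates, tighten the constraints or shift the input ranges, and run the loop again; each iteration narrows the feasible region toward the design intent.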



A conversational UI assistant. You provide the content, then it helps you to create a layout and choose a visual style.

The Grid V3.

Chooses templates & presentation styles, retouches and crops photos — all by itself. Moreover, the system runs A/B tests to choose the most suitable pattern.

Adaptive Modular Scale

Experimental computational design platform that generates design system tokens.



The team learned how to answer the question, “What will the booked price of a listing be on any given day in the future?” so that its hosts could set competitive prices.

Sketch Confetti

A plugin that generates modern confetti patterns to fit into an existing screen mockup.


Neural network-based app that stylizes photos to look like works of famous artists.

Assisted Writing

Assisted Writing re-imagines word processing and explores new forms of writing that allow authors to shift their focus from creation to curation, and to write more joyfully.

Yandex Launcher

An Android launcher that uses an algorithm to automatically set colors for app cards, based on app icons.
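The idea behind that kind of feature can be sketched with a simple average-color heuristic. This is purely illustrative: Yandex's actual algorithm is not public, and production launchers typically use more robust palette extraction (e.g. k-means clustering) rather than a plain mean.

```python
# Derive a card background color from an icon's pixels by averaging them.
# A real implementation would cluster colors and weight by saturation.

def average_color(pixels):
    """pixels: list of (r, g, b) tuples; returns the mean color."""
    n = len(pixels)
    r = sum(p[0] for p in pixels) // n
    g = sum(p[1] for p in pixels) // n
    b = sum(p[2] for p in pixels) // n
    return (r, g, b)

def card_color(pixels, darken=0.8):
    """Darken the average slightly so white icon art stays legible."""
    r, g, b = average_color(pixels)
    return tuple(int(c * darken) for c in (r, g, b))

# A mostly-red toy "icon": three red pixels and one white pixel.
icon = [(255, 0, 0)] * 3 + [(255, 255, 255)]
print(card_color(icon))  # a darkened red, matching the icon's dominant hue
```

In practice the pixel list would come from decoding the icon bitmap; the launcher then applies the derived color to the card background behind the icon.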

Adobe Fontphoria

This Sensei experiment turns any letter image into a glyph, then creates a complete alphabet and font out of it. It can also apply the result to a physical object via augmented reality.

Individualized UX

Mutative Design

A well-thought-out model of adaptive interfaces that considers many variables to fit particular users.

Anticipatory Design

A broader view of UX personalization and anticipation of user wishes.


An algorithm that deploys individualized phrases based on which kinds of emotional pleas work best on you. They also experiment with UI.


Visual Design


Choose your favorite styles, pick a color, and voilà: Logojoy generates endless ideas.

Variable Fonts

Parametric typography based on the idea of interpolation from several key variables: weight, width, and optical size. 
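Interpolation along a variable-font axis can be sketched numerically. This is a simplified illustration of the concept (the master values below are made up); real variable fonts interpolate the coordinates of every glyph outline point between masters, per the OpenType Font Variations model.

```python
# Linear interpolation between two "masters" along a weight axis.
# The thin/black stem widths here are made-up example values.

def lerp(a, b, t):
    """Blend values a..b by t in [0, 1]."""
    return a + (b - a) * t

def stem_width(weight, wmin=100, wmax=900, thin=40.0, black=240.0):
    """Stem thickness (font units) for a weight between the two masters."""
    t = (weight - wmin) / (wmax - wmin)  # position along the weight axis
    return lerp(thin, black, t)

# Regular (400) sits 3/8 of the way from the Thin master to the Black one.
print(stem_width(400))
```

The same `lerp` applies independently to each axis (weight, width, optical size), which is why a single variable font file can render a whole family of styles.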

Oi Responsive Logo

A responsive logo that reacts to the sound of your voice. Onformative developed software that changes the logo's shape, rippling through color gradients in response to the sound of a user's voice.

Google AutoDraw

An experimental project that turns sketches into icons. It can help non-designers use quality icons in their mockups.

Fashion. Architecture. Industrial. Music. Movies. Games. Urban & More

Autodesk Dreamcatcher

An algorithm generates many variations of a design using predefined rules and patterns.

Parametric Design

Zaha Hadid Architects bureau uses this term to define their generative approach to architecture.

Cognitive Movie Trailer

IBM Watson collaborated with 20th Century Fox to create the first-ever cognitive movie trailer for the movie Morgan.

Flow Machines

Flow Machines unveiled the first song to be composed by artificial intelligence, the Beatles-esque "Daddy's Car."


Nike & Adidas have begun an AI 3D-printing space race in footwear.

Generative Art

New forms of modern art: human/AI collaboration is an aesthetic dialogue similar to that of improvisational jazz.


Design with ML

Designing Machine Learning is a project by the Stanford that makes ML more accessible.


A JavaScript library for creative coding, making coding accessible for artists & designers.


Friendly machine learning for the web. Everything you need to get up and running with ml5.

Teachable Machine

A fast, easy way to create machine learning models for your sites, apps, and more – no expertise or coding required.

Generative Design Leaders

Addie Wagenknecht

Twitter: @wheresaddie

Vera Molnár

Vera Molnár (born 1924) is a French media artist of Hungarian origin. She is considered to be a pioneer of computer art.


Sputniko!

Twitter: @5putniko

Born in 1985, Sputniko! is a Japanese/British artist based in Tokyo. She is known for her film and multimedia installation works, which explore the social and ethical implications of emerging technologies.

Mike Brondbjerg

Twitter: @mikebrondbjerg Designer / developer / artist working in data viz, information & generative design. Currently at London City Hall Intelligence Unit working on Data Viz.

Designing with Ai



At the nascent stages of the Fourth Industrial Revolution, artificial intelligence (AI) is radically accelerating change in the disciplines of human factors, ergonomics, industrial, information, interaction, and interior design, and architecture. Whether you are a product designer of connected devices (the Internet of Things), websites, or mobile applications; an architect or interior designer building a home or commercial building; or an urban planner developing land-use programs, AI and quantum computing enable you to make experiences more intelligent for your users, customers, and constituents. As we forge ahead, with command of the command line and intelligent technology powering our designs, designers can strive to captivate customers with enduring relationships that get smarter with every use.

Diagram 1.0

Confluence of Design Practices

The State of Participative Human Centered Design

The entanglement and convergence of human-centered design (HCD), user-centered design (UCD), cognitive ergonomics, neuro-ergonomics, and technology has accelerated over the last 30 years. HCD and UCD have traditionally focused on developing creative solutions to problems by involving the human perspective in every step of the process. They rely on in-field observation to identify needs and create products that customers have difficulty envisioning.

In parallel, Cognitive Ergonomics emerged in response to the design challenges associated with complex systems and machines, leveraging advances in cognitive psychology and artificial intelligence. Cognitive Ergonomics, as defined by the International Ergonomics Association, “is concerned with mental processes, such as perception, memory, reasoning, and motor response, as they affect interactions among humans and other elements of a system.” These systems can range from the cockpit instrumentation and controls for a fighter jet to the dashboard design of a car or the 3D anatomical models of human anatomy used in medicine. Cognitive Ergonomics topics include: mental workload, decision-making, human-system interaction, human reliability, work stress and training to improve both performance and human wellbeing. 

Neuro-ergonomics has developed as a subfield of cognitive ergonomics, with a focus on enhancing human-system interaction by using neural correlates to understand situational task demands. It includes work on adaptive automation, which uses real-time assessments of the user's workload and performance to make changes in the system that enhance performance. One application is driving safety, particularly for older drivers with cognitive impairments or texting distractions. Neuro-ergonomics includes virtual reality training as well as research into brain-computer interfaces (BCIs), focused on using brain signals to operate external devices without motor input.

The Confluence of Design Practices

Practitioners critical of human-centric design, user-centric design, and design thinking rightfully state that these practices lack real scientific rigor for understanding how people interact with products, services, and experiences. In contrast, cognitive and neuro-ergonomics rely on measurable neurological and physical indicators of risk and performance.

Business, government, academia, and consumers of design demand outcomes that require the best of both worlds: an approach that equally balances intuition and art with objective scientific rigor and intelligent technologies. This appeal has led to the establishment of a new field of design called Cognitive Experience Design.

Cognitive Experience Design was established as a practice in 2014 by Joanna Peña-Bickley, a design technologist and former Global Chief Creative Officer at IBM, who used IBM Watson (artificial intelligence) to invent market-making products, services, and customer experiences for business, government, and consumers.

Cognitive Experience Design unites the use of AI, cognitive ergonomics, and neuro-ergonomics with HCD and UCD, with a mission to enable every design practitioner to command their world through ethical principles, education and skills training, iterative agile processes, experiments, and practical applications that solve complex problems in every industry. Simply put, Cognitive Experience Design moves design practitioners from the business-management fad of design thinking to the measurable magic of Design for Thinking & Doing.

Making Magic with Ai

Moving From Science Fiction To Enchanting Brand Experiences


Do you believe in magic?
I do. Most of my creations have been inspired by the belief that there is a unique dialog between fiction and invention. The comics of Trina Robbins, Wendy Pini, Louise Simonson, and Charles Moulton, the works of Arthur C. Clarke, and the classic tales of the Brothers Grimm have shaped my imagination and my quest to design enchanting experiences that use artificial intelligence (AI) and the Internet of Things (IoT) as a canvas, data as paint, and enduring stories to create unique customer experiences that move us from science fiction to a magical reality.

This year's Cannes Lions Festival of Creativity provided a unique setting to explore our connected world with a collective of creative leaders from around the world. In this presentation I shared how the combination of AI and the IoT enables brands to command unseen forces by turning everyday objects into monetizable media platforms that provide new revenue streams for brands.


Today, AI (artificial intelligence) and the IoT, led by connected nomadic devices, are flipping the advertising, marketing, and communications industries on their heads by making physical spaces and things the new digital interface. Gone are the days when we could rely on an image or a 30-second story alone to define a brand experience. Now you must orchestrate a dance of devices powered by new narratives and interactions to demonstrate a brand's purpose in our lives.

In our quest to remove pain points from our customers' journeys, we must explore how AI empowers us to create a new, transformative narrative for brands: one that pivots us from the singular superhero with superpowers to a group or community of kindred spirits coming down from the mountaintop to create a collective intelligence accessible to all of humanity. No one does this better than the gaming industry. For most creators and marketers alike, this is new territory that relies on this new canvas to enable a series of brand rituals moving us from brand push to pull, from objects to systems, from authority to emergence, and begging us to choose a compass over maps as we reimagine our businesses as centers of innovation that thrive in the Fourth Industrial Revolution.


Download Presentation | Explore How To Make Your Brand Intelligent

Defining Ai As A Canvas For Makers
Ai, or cognitive computing, is a combination of technologies. Three core technologies are shaping the NOW: Data Mining, Pattern Recognition and Natural Language Processing (NLP). However, we must consider them all to unlock the potential magic moments:

  1. Natural Language Generation: Producing text from computer data. Currently used in customer service, report generation, and summarizing business intelligence insights. Sample vendors: Attivio, Automated Insights, Cambridge Semantics, Digital Reasoning, Lucidworks, Narrative Science, SAS, Yseop.
  2. Speech Recognition: Transcribe and transform human speech into format useful for computer applications. Currently used in interactive voice response systems and mobile applications. Sample vendors: NICE, Nuance Communications, OpenText, Verint Systems.
  3. Virtual Agents: “The current darling of the media,” says Forrester (I believe they refer to my evolving relationships with Alexa), from simple chatbots to advanced systems that can network with humans. Currently used in customer service and support and as a smart home manager. Sample vendors: Amazon, Apple, Artificial Solutions, Assist AI, Creative Virtual, Google, IBM, IPsoft, Microsoft, Satisfi.
  4. Machine Learning Platforms: Providing algorithms, APIs, development and training toolkits, data, as well as computing power to design, train, and deploy models into applications, processes, and other machines. Currently used in a wide range of enterprise applications, mostly involving prediction or classification. Sample vendors: Amazon, Fractal Analytics, Google, Microsoft, SAS, Skytree.
  5. AI-optimized Hardware: Graphics processing units (GPU) and appliances specifically designed and architected to efficiently run AI-oriented computational jobs. Currently primarily making a difference in deep learning applications. Sample vendors: Alluviate, Cray, Google, IBM, Intel, Nvidia.
  6. Decision Management: Engines that insert rules and logic into AI systems and used for initial setup/training and ongoing maintenance and tuning. A mature technology, it is used in a wide variety of enterprise applications, assisting in or performing automated decision-making. Sample vendors: Advanced Systems Concepts, Informatica, Maana, Pegasystems, UiPath.
  7. Deep Learning Platforms: A special type of machine learning consisting of artificial neural networks with multiple abstraction layers. Currently primarily used in pattern recognition and classification applications supported by very large data sets. Sample vendors: Amazon AWS, Deep Instinct, Ersatz Labs, Fluid AI, MathWorks, Peltarion, Saffron Technology, Sentient Technologies.
  8. Biometrics: Enable more natural interactions between humans and machines, including but not limited to image and touch recognition, speech, and body language. Currently used primarily in market research. Sample vendors: Amazon Rekognition, 3VR, Affectiva, Agnitio, FaceFirst, Sensory, Synqera, Tahzoo.
  9. Robotic Process Automation: Using scripts and other methods to automate human action to support efficient business processes. Currently used where it’s too expensive or inefficient for humans to execute a task or a process. Sample vendors: Advanced Systems Concepts, Automation Anywhere, Blue Prism, UiPath, WorkFusion.
  10. Text Analytics and NLP: Natural language processing (NLP) uses and supports text analytics by facilitating the understanding of sentence structure and meaning, sentiment, and intent through statistical and machine learning methods. Currently used in fraud detection and security, a wide range of automated assistants, and applications for mining unstructured data. Sample vendors: Amazon Polly, IBM Watson, Basis Technology, Coveo, Expert System, Indico, Knime, Lexalytics, Linguamatics, Mindbreeze, Sinequa, Stratifyd, Synapsify.
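To make the statistical flavor of text analytics and sentiment detection concrete, here is a deliberately tiny sketch that scores sentences against hand-made word lists. Real NLP systems learn these weights from large corpora; the lexicons, function name, and example sentences below are hypothetical toy choices, not any vendor's actual API.

```python
# Toy lexicon-based sentiment scorer (illustrative only).
# Production text analytics would use learned models, not fixed word lists.
POSITIVE = {"great", "love", "delightful", "magic"}
NEGATIVE = {"broken", "slow", "painful", "confusing"}

def sentiment_score(text: str) -> int:
    """Return (# positive words) - (# negative words) for a whitespace-tokenized text."""
    tokens = text.lower().split()
    return sum(t in POSITIVE for t in tokens) - sum(t in NEGATIVE for t in tokens)

print(sentiment_score("customers love this delightful experience"))  # 2
print(sentiment_score("the checkout flow is slow and confusing"))    # -2
```

Even this caricature shows why the category matters to brands: a single number derived from unstructured text can route a support ticket, flag fraud-like language, or trigger a follow-up, which is the pattern the vendor platforms above industrialize.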

These new Ai services present opportunities to shape better realities for our customers, fans and employees. As designers, these technologies present us with an entirely new brief.


Making A Business Case for Magical Moments Powered By Ai

The market for Ai tech is flourishing. Numerous startups are emerging, the internet giants are racing to acquire them, and enterprise investment and adoption are rising significantly. A Narrative Science survey found last year that 38% of enterprises were already using AI, a figure set to grow to 62% by 2018. Forrester Research predicted a greater than 300% increase in investment in artificial intelligence in 2017 compared with 2016. IDC estimated that the AI market will grow from $8 billion in 2016 to more than $47 billion in 2020.


In just two years, Ai promises to transform society on the scale of the industrial, technical, and digital revolutions before it. Machines that can sense, reason, and act will accelerate solutions to large-scale problems in a myriad of fields, including science, finance, medicine and education, augmenting human capability and helping us go farther, faster. Buoyed by Moore’s Law and fed by a deluge of data, AI is at the heart of much of today’s technical innovation.


Connecting Fiction To Inspire Your Next Market Making Invention

Modern marketers believe that brands are now defined as the sum total of the interactions and experiences people have with them in connected space and time. This means that a gesture, word, command or skill, or sometimes a combination of the four, could define why we use a brand or stay loyal to it.


Thanks to a century of science fiction in which technology is indistinguishable from magic, to changing societal norms, and to the acceleration of innovation, our customers, clients and employees have higher expectations of the brand experiences they choose to be loyal to.


People expect more than automation; they expect new, inventive services that are increasingly personalized, human, and interoperable, and that provide exceptional value to their busy day. For instance, research showed that people expect a connected car to drive itself, but they truly desire to see it perform self-learning, self-healing, socializing and self-configuring tasks that make mobility delightful. Thanks to the fiction of The Jetsons, Knight Rider, Star Trek and Rendezvous with Rama, the same expectations can be found in what people want from connected homes, kitchens, bathrooms, gyms, workplaces and entertainment venues.


When identifying the moments you want to own in your customer’s journey, it pays to pause, traverse centuries of experience with myth and legend, and ask what it would be like if part of the physical world were magic. What if this mirror could speak? What if spinach gave me super strength? What if my bag was magic? What if my doctor had superpowers to heal? What if my toys had secret lives? What if I could give people regenerative healing powers?


When we pause to dream and quench our curiosity, we open an opportunity to connect fiction to invention and innovation that will position your brand as a maker of its own market.

The Mirrorworld

CognitiveExperience.Design | Design Dossier


We are building a 1-to-1 map of almost unimaginable scope. When it's complete, our physical reality will merge with the digital universe.

Inside the mirror­world, agents like Siri and Alexa will take on 3D forms that can see and be seen. Their eyes will be the embedded billion eyes of the matrix. They will be able not just to hear our voices but also, by watching our virtual avatars, to see our gestures and pick up on our microexpressions and moods. Their spatial forms—faces, limbs—will also increase the nuances of their interactions with us. 

The mirrorworld will be the badly needed interface where we meet AIs, which otherwise are abstract spirits in the cloud.

A full-size, 3D digital twin is more than a spreadsheet. Embodied with volume, size, and texture, it acts like an avatar.

For the mirrorworld to come fully online, we don’t just need everything to have a digital twin; we also need to build a 3D model of physical reality in which to place those twins. Consumers will largely do this themselves: When someone gazes at a scene through a device, particularly wearable glasses, tiny embedded cameras looking out will map what they see. The cameras only capture sheets of pixels, which don’t mean much. But artificial intelligence—embedded in the device, in the cloud, or both—will make sense of those pixels; it will pinpoint where you are in a place, at the very same time that it’s assessing what is in that place. The technical term for this is SLAM—simultaneous localization and mapping—and it’s happening now.
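The predict-then-correct loop at the heart of SLAM can be caricatured in one dimension: the device dead-reckons its own position forward, then uses a range measurement to a landmark to nudge both its pose estimate and its map (the landmark estimate) at once. The sketch below is a hypothetical simplification; real SLAM systems use probabilistic filters (e.g. extended Kalman filters) or graph optimization over camera pixels, not a fixed averaging gain, and all numbers and names here are invented for illustration.

```python
# Toy 1D SLAM-flavored sketch: jointly refine a robot pose and a landmark
# position from noisy odometry and noisy range measurements.

def slam_step(pose_est, landmark_est, odometry, range_meas, gain=0.5):
    """One fusion step: predict the pose from odometry, then split the
    measurement surprise (innovation) between pose and landmark estimates."""
    pose_pred = pose_est + odometry              # prediction (dead reckoning)
    predicted_range = landmark_est - pose_pred   # what we expected to measure
    innovation = range_meas - predicted_range    # surprise in the measurement
    pose_new = pose_pred - gain * innovation / 2        # correct localization
    landmark_new = landmark_est + gain * innovation / 2  # correct the map
    return pose_new, landmark_new

# Robot starts at 0; the true landmark sits at 10, but our initial guess is 8.
pose, landmark = 0.0, 8.0
moves = [1.0, 1.0, 1.0]    # odometry readings per step
ranges = [9.0, 8.0, 7.0]   # measured ranges to the landmark per step
for odo, rng in zip(moves, ranges):
    pose, landmark = slam_step(pose, landmark, odo, rng)
# After three steps the landmark estimate has drifted from 8.0 toward 10.0.
```

The "simultaneous" in SLAM is visible in the last two lines of `slam_step`: one measurement updates both where you are and what the world looks like, which is exactly what the embedded cameras and on-device AI described above are doing with sheets of pixels.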

a platform for developing AR apps that can discern large objects in real time.

Everything connected to the internet will be connected to the mirrorworld. And anything connected to the mirrorworld will see and be seen by everything else in this interconnected environment. Watches will detect chairs; chairs will detect spreadsheets; glasses will detect watches, even under a sleeve; tablets will see the inside of a turbine; turbines will see workers around them.


The mirrorworld is the next major paradigm shift, one that will unfold over the next two decades. This shift layers the digital world that exists today, including the Internet of Things, 3D models, SLAM, and digital mapping, onto our physical world.