
AI Patterns

This page is still very preliminary and will see extensive modifications soon, but here are a few initial notes.

Coarse Taxonomy of AI

This is a very preliminary taxonomy; it still has significant gaps that I will fill in the near future.

Traditional Machine Learning

Deep Learning

Reinforcement Learning

Natural Language Processing (NLP)

Computer Vision

Signal Processing & Pattern Recognition

Optimization

Reasoning & Automated Planning (AP)

Data Mining & Clustering

Patterns of AI

To be extended.

Freezing Neural Net Layers

Freeze the input-side layers of a network pretrained on a similar task and retrain only the output layer, potentially together with the layers adjacent to it, to accelerate training, esp. for CNNs. The more similar the tasks, the fewer layers need to be retrained. The idea is to leverage the knowledge already present in a net trained on similar tasks.
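
A minimal sketch of the pattern in PyTorch, assuming torchvision's pretrained ResNet-18 as the donor network (the layer names fc and layer4 are specific to that model):

```python
import torch.nn as nn
from torchvision import models

# Load a network pretrained on a related task (here: ImageNet classification).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze all pretrained layers so their learned features stay intact.
for param in model.parameters():
    param.requires_grad = False

# Replace the output layer with a fresh, trainable head for the new task
# (newly created modules have requires_grad=True by default).
model.fc = nn.Linear(model.fc.in_features, 10)  # 10 = classes in the new task

# If the tasks are less similar, unfreeze the last residual block as well.
for param in model.layer4.parameters():
    param.requires_grad = True
```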

Multi-Head Neural Nets

The idea is to use a backbone network and multiple heads for specific tasks: the backbone is trained to capture the overall information, and each head uses it to form answers for its specific subtask. For instance, in object detection it is often desirable both to localize an object (a regression task) and to recognize or even identify it (a classification task), which can be modeled with two heads.
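
A sketch of such a two-head network in PyTorch; all module sizes are illustrative, and a real detector would use a much deeper backbone:

```python
import torch
import torch.nn as nn

class DetectionNet(nn.Module):
    """Shared backbone with a classification head and a box-regression head."""
    def __init__(self, num_classes: int):
        super().__init__()
        # Backbone: captures general-purpose features shared by all heads.
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Head 1: classification (what is the object?).
        self.cls_head = nn.Linear(32, num_classes)
        # Head 2: regression (where is the object? 4 box coordinates).
        self.box_head = nn.Linear(32, 4)

    def forward(self, x: torch.Tensor):
        features = self.backbone(x)
        return self.cls_head(features), self.box_head(features)

# During training, the joint loss is typically a weighted sum of per-head losses.
net = DetectionNet(num_classes=20)
logits, boxes = net(torch.randn(8, 3, 64, 64))
```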

Agent Actions via Event Sourcing / CQRS

Event Sourcing captures each state change of an application in an event object. It is a well-known cloud technique that, for instance, makes it easy to reconstruct the state of a microservice after it has died. The same technique can be used for AI agents: reinforcement learning has techniques like replay buffers and hindsight experience replay, which require resampling of past actions. It is also useful to automatically capture actions for debugging, so developers can afterwards step through the agent's run and belief state. And the same resilience advantage as in the cloud applies, since agents are oftentimes trained in cloud environments.
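
A minimal sketch in plain Python; the event schema and the reducer are assumptions for illustration, not a fixed format:

```python
from dataclasses import dataclass, field
import time

@dataclass
class ActionEvent:
    """One immutable record per agent action: the event in Event Sourcing."""
    step: int
    action: str
    observation: dict
    reward: float
    timestamp: float = field(default_factory=time.time)

class EventLog:
    """Append-only log. Persisting each event (e.g. to a file or message
    broker) yields crash resilience and a full audit trail for debugging."""
    def __init__(self) -> None:
        self.events: list[ActionEvent] = []

    def append(self, event: ActionEvent) -> None:
        self.events.append(event)

    def replay(self, reducer, initial_state):
        """Reconstruct state by folding the reducer over all recorded events."""
        state = initial_state
        for event in self.events:
            state = reducer(state, event)
        return state

log = EventLog()
log.append(ActionEvent(step=0, action="move_north",
                       observation={"pos": [0, 1]}, reward=0.1))

# Replaying reconstructs state after a crash and doubles as a data source
# for replay buffers or hindsight experience replay.
total_reward = log.replay(lambda state, e: state + e.reward, 0.0)
```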

Mashups / Smart Ensembling

There are many classifiers and cloud services covering the same functionality; for instance, there are many translation APIs and many object recognition neural networks. Ensembling itself is a well-known AI pattern, but here we also assume a mashup enabler which transforms resources to facilitate the combination of their results. For instance, we can align the response formats of the translation APIs from multiple cloud vendors, or of multiple neural nets. This can include additional selection, ranking and abstraction steps: we can select only question answering services that are specific to medicine, rerank their answers, and abstract their output. E.g., if one system responds with H1N1 and the other with H3N2, we can still unify them as responses for Influenza A, which is much more informative than discarding them.
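
A sketch of the mashup-enabler idea with a simple majority-vote ensemble; the vendor functions and their response schemas below are made-up stand-ins, and in practice each adapter would wrap a real API call:

```python
from collections import Counter

# Stand-ins for real vendor APIs; each returns a different response format.
def vendor_a(text: str) -> dict:
    return {"translations": [{"text": "Hallo Welt"}]}

def vendor_b(text: str) -> dict:
    return {"result": "Hallo Welt"}

def vendor_c(text: str) -> dict:
    return {"output": {"text": "Hallo, Welt"}}

# Mashup enabler: adapters that align all vendor formats to a plain string.
ADAPTERS = [
    lambda text: vendor_a(text)["translations"][0]["text"],
    lambda text: vendor_b(text)["result"],
    lambda text: vendor_c(text)["output"]["text"],
]

def ensemble_translate(text: str) -> str:
    """Collect the aligned results and pick the most frequent one.
    Selection, reranking, or abstraction steps could be inserted here."""
    results = [adapter(text) for adapter in ADAPTERS]
    best, _count = Counter(results).most_common(1)[0]
    return best

print(ensemble_translate("Hello, world"))  # -> "Hallo Welt"
```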

Embeddings

Graph and language embeddings such as word2vec, sentence2vec, doc2vec and node2vec are widely used, as are even more domain-specific embeddings like hotel2vec.

Facebook has published on applying embedding-based retrieval (EBR) to their search product.

Typical vector arithmetic examples exist, like queen – woman + man ≈ king, or that the offsets between countries and their capitals, or between words in one language and their translations in another, are approximately constant. How strongly these properties hold in practice seems to vary, but it is interesting that they exist to some degree.
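
This arithmetic can be tried directly with gensim's pretrained vectors (a sketch; the first call downloads the GloVe model, and exact rankings vary across embeddings). The positive/negative arguments of most_similar implement the addition and subtraction:

```python
import gensim.downloader as api

# Small pretrained GloVe vectors (downloaded on first use).
vectors = api.load("glove-wiki-gigaword-50")

# queen - woman + man: positive terms are added, negative terms subtracted.
print(vectors.most_similar(positive=["queen", "man"], negative=["woman"], topn=3))
# Expect "king" near the top of the results.

# Country/capital offsets are often roughly parallel:
print(vectors.most_similar(positive=["paris", "germany"], negative=["france"], topn=3))
# Expect "berlin" among the top results.
```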

Feature Stores

AWS SageMaker Feature Store, Databricks Feature Store, DoorDash Feature Store, Feast (announced on the Google Cloud blog), Hopsworks, and more.
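
To give a flavor of the common API shape, here is a sketch using Feast's Python SDK; it follows the driver-stats example from Feast's documentation and assumes an initialized feature repository (with its feature_store.yaml) in the current directory:

```python
from feast import FeatureStore

# Point the SDK at an existing feature repository.
store = FeatureStore(repo_path=".")

# Fetch online features for low-latency serving; the feature view and
# entity names come from Feast's driver-stats tutorial.
features = store.get_online_features(
    features=[
        "driver_hourly_stats:conv_rate",
        "driver_hourly_stats:avg_daily_trips",
    ],
    entity_rows=[{"driver_id": 1001}],
).to_dict()

print(features)
```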

Additional Patterns from the Literature

There are already multiple books to draw from, esp. “Deep Learning Patterns and Practices” by Andrew Ferlitsch, “Distributed Machine Learning Patterns” by Yuan Tang, and “Machine Learning Design Patterns” by Lakshmanan et al. The following incomplete list mentions patterns already captured in the literature:

To be continued.
