AI Strategy Derivation.

The Axiologic Solutions STRATEGIZEit methodology for creating an IT strategy has been extended to address the specific requirements of deriving an enterprise Artificial Intelligence (AI) strategy.

An AI strategy refers to an organization’s vision, plans, and roadmap for how AI capabilities generally (and machine learning functionality in particular) will be identified, acquired, tailored, integrated, and deployed to help achieve the agency’s business goals. The core components of any AI strategy lie at the intersection of ML-based algorithms, data, infrastructure, skills, organization, and integration – underpinned by a sound management approach (change, governance, quality assurance).

Since AI is new to the marketplace, most government agencies need assistance creating a strategy and roadmap for navigating the AI space. Our approach is based on a sound AI framework that includes:

  • The AI capabilities – that will be used, covering assisted AI, augmented AI, and autonomous AI.
  • The processes, workflows, functions, services, products, value streams – that are candidates to be supported, enhanced, or replaced by AI.
  • The data – primarily the training data that will be used to generate the various models that will be foundational to the AI.
  • The infrastructure/platform to run the AI – primarily focusing on data preparation, model training time, and AI inference time (running the ML models to get a prediction) – consisting of GPU clusters and traditional CPU-based technologies.
  • The tools used throughout the AI lifecycle – design, data preparation, ML model training, validation, integration, packaging, version management, and monitoring.
  • The AI team – and its sub-teams, and how they are organizationally aligned to support the AI throughout its lifecycle.
  • The partners – that will provide the data, ML models, or AI.
  • The set of policies, standards, best practices – that anchor the overall governance.

Our AI Strategy Derivation process can be used to enhance government Digital Transformation strategies.

AI Governance.

As organizations start to use AI capabilities at greater frequency and in an increasing number of applications, they are becoming aware that AI is not traditional software and has unique “management” challenges, specifically around ensuring accuracy, transparency (explainable & interpretable), fairness, privacy, accountability, reliability, security, and safety.

Many government organizations already have mature IT governance practices. So, why do they need AI governance? AI governance may share some practices with IT governance, but it’s a distinct discipline, particularly at this early stage of AI adoption and maturity. The goal of AI governance is not limited to ensuring the effective creation and use of AI. In fact, the scope is much larger, covering the wider topic of “trustworthiness,” which encompasses supply chain management, systems engineering, risk management, regulatory compliance, and ethics.

Since AI is new to most government organizations, AI governance will also be very new. AI governance is also relatively new to the marketplace, with no obvious leader in the space (even companies such as Google, Microsoft, and OpenAI are actively researching this area). It is not uncommon for organizations that are new to AI not even to have considered the need for AI governance (e.g., they don’t see the need; they are not sure who should do it; they don’t want to incur the expense). While organizations can have early successes with AI, its increased usage won’t scale unless they have a reasonably mature AI governance function; otherwise, AI projects will fail to produce value and may lead to significant organizational risk.

Axiologic Solutions offers services to define and implement AI governance, aligned to a maturity model.

Next-Generation Neuro-Symbolic Systems Architecture.

Over the last two years, we have seen a large number of AI-powered tools that can now bring expansive new capabilities to business applications. Many of these are generative AI capabilities that can produce new text, imagery, video, and audio (e.g., OpenAI ChatGPT), powered by large neural networks (e.g., large language models; large diffusion networks). We are seeing the increased integration of traditional symbolic software development techniques (e.g., writing out logic in a text-based programming language) with modern neural networks, creating the need for a neuro-symbolic software architecture (in short, NeSy AI). The term “neuro” (or sub-symbolic) in this case refers to the use of artificial neural networks, or connectionist systems, in the widest sense. The term “symbolic” refers to AI approaches that are based on explicit symbol manipulation (also known as good-old-fashioned AI).

The general promise of NeSy AI lies in the hopes of a best-of-both-worlds scenario, where the complementary strengths of neural and symbolic approaches can be combined in a favorable way. On the neural side, the desirable strengths would include trainability from (imperfect) raw data and robustness against faults in the underlying data, while on the symbolic side, one would like to retain the inherently high explainability and provable correctness of these systems, as well as the ease of making use of deep human expert knowledge in their design and function. In terms of functional features, utilizing symbolic approaches would help neural machine learning with issues like out-of-vocabulary handling, data quality, training from smaller data sets, recovery from errors, and in general, explainability.
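The best-of-both-worlds idea can be illustrated with a minimal sketch: a (mock) neural component proposes a label with a confidence score, and an explicit symbolic rule layer overrides proposals that violate known domain constraints. All names, rules, and thresholds here are illustrative assumptions, not a specific architecture.

```python
# Hedged sketch of a neuro-symbolic decision flow (illustrative only).

def neural_propose(text):
    """Stand-in for a trained classifier: returns (label, confidence)."""
    return ("approve", 0.92) if "routine" in text else ("review", 0.55)

def symbolic_check(label, record):
    """Explicit, auditable rule: high-value requests are never auto-approved."""
    if label == "approve" and record["amount"] > 10_000:
        return "review"  # deterministic, easily explainable override
    return label

record = {"amount": 25_000}
label, conf = neural_propose("routine purchase request")
final = symbolic_check(label, record)
# The symbolic layer overrides the neural proposal: final == "review"
```

The symbolic layer contributes exactly the strengths the text describes: provable correctness for the cases it covers and a human-readable explanation for every override.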

These neuro-symbolic software architectures have unique functional and non-functional requirements and require unique development approaches.

We have also extended our ENGINEERai methodology to directly support the creation of neuro-symbolic systems.

AI Creation and Development.

Many organizations have some experience with creating ML models, but most do not know how to effectively transition ML into AI. We have defined a mature lifecycle for the creation of AI; some key steps include:

  • ML model identification (open source, commercial); model reuse, including freezing some weights
  • ML model (foundational) re/training on large GPU clusters, including GPT and diffusion-based networks
  • ML model (foundational) fine-tuning
  • ML model alignment, including policy determination
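The model-reuse step above (including freezing some weights) can be sketched in a few lines. This toy uses plain Python rather than a real framework such as PyTorch; the parameter names and the single-step optimizer are illustrative assumptions. The point is simply that frozen parameters are excluded from gradient updates while the task-specific parameters continue to train.

```python
# Toy illustration of fine-tuning a reused model with frozen weights.

def sgd_step(params, grads, frozen, lr=0.1):
    """Apply one gradient step, skipping parameters marked as frozen."""
    return {
        name: (w if name in frozen else w - lr * grads[name])
        for name, w in params.items()
    }

pretrained = {"embedding": 0.8, "encoder": 0.5, "head": 0.1}
grads      = {"embedding": 0.2, "encoder": 0.3, "head": 0.4}

# Freeze the reused layers; train only the task-specific head.
updated = sgd_step(pretrained, grads, frozen={"embedding", "encoder"})
# updated["embedding"] == 0.8 (unchanged); updated["head"] == 0.06
```

In a real framework, the same effect is typically achieved by disabling gradient tracking on the reused layers before fine-tuning.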

AI Service Acquisition and Assessment.

AI capabilities can be packaged as software or services and are available from COTS vendors, open-source projects, and other government agencies. We have extended our Acquisition service to include the specific needs of AI capabilities, such as:

  • Identification of trusted sources of AI
  • Evaluation of the trustworthiness of the AI (safety, bias, privacy, reliability, performance, understandability)
  • Optimizing training for a given training budget vs. inference budget (Chinchilla scaling laws)
  • Understanding the TCO of the AI, such as costs from ML model training, ML model fine-tuning, and ML usage/inference
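The Chinchilla scaling-law trade-off mentioned above can be sketched numerically. This uses the common approximations C ≈ 6·N·D (compute in FLOPs for N parameters trained on D tokens) and D ≈ 20·N (tokens per parameter at the compute-optimal point); the helper function is an assumption for illustration, not a budgeting tool.

```python
import math

def chinchilla_optimal(compute_flops, tokens_per_param=20.0):
    """Estimate a compute-optimal model size (params) and token count.

    Solves C = 6 * N * D with D = tokens_per_param * N, giving
    N = sqrt(C / (6 * tokens_per_param)).
    """
    n_params = math.sqrt(compute_flops / (6.0 * tokens_per_param))
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

# For a hypothetical 1e23-FLOP training budget:
n, d = chinchilla_optimal(1e23)
```

This kind of estimate is what drives the training-vs-inference budget decision: a smaller model trained on more tokens is cheaper to serve at inference time for the same training compute.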

AI Integration.

Recent AI tools like ChatGPT, DALL-E, Stable Diffusion, and Midjourney take text prompts as input and use them to produce output in the form of generated text or generated images.

These text prompts act as the “instructions” to the model. They are a type of “AI programming language”. To maximize the quality of the model’s output, prompts must be correctly specified (just as software code must be written correctly to produce a correct program). The design and specification of prompts is known as “prompt engineering”.

We have packaged best practices for performing “prompt engineering” in a quasi-methodology known as PROMPTit. By using these prescriptive statements, we want to remove the randomness of prompt creation so that users can get the most benefit from the underlying AI they are accessing. Being able to direct AI-enabled tools properly is a very powerful skill that will maximize the value of the AI.
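One way to remove randomness from prompt creation is to template the prompt so that its role, task, constraints, and input are specified explicitly rather than ad hoc. The template below is a generic illustration of that practice (an assumption, not a PROMPTit artifact).

```python
# Minimal sketch of a structured prompt template (illustrative format).
PROMPT_TEMPLATE = (
    "You are a {role}.\n"
    "Task: {task}\n"
    "Constraints: {constraints}\n"
    "Input:\n{input_text}\n"
)

def build_prompt(role, task, constraints, input_text):
    """Fill the template so every prompt carries the same explicit parts."""
    return PROMPT_TEMPLATE.format(
        role=role, task=task, constraints=constraints, input_text=input_text
    )

prompt = build_prompt(
    role="policy analyst",
    task="Summarize the memo in three bullet points.",
    constraints="Plain language; cite section numbers.",
    input_text="<memo text here>",
)
```

Templating also makes prompts testable and versionable, which matters once prompts are treated as a kind of source code.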

AI Plugin Development.

It is increasingly common to integrate symbolic-based tools into a neural network in a neuro-symbolic hybrid architecture. We extend the neural network inferencing capability with specialized symbolic tools, such as:

  • Imagery analysis: text detection, object detection, object classification, scene understanding
  • Specialized symbolic processing, e.g., mathematical operations, transactions
  • Access to additional knowledge, e.g., private enterprise knowledge graph extractors
  • Reasoners

Plugins are software tools that must adhere to the specific architectures dictated by the AI platform. We have designed best practices for creating AI plugins that are reliable and performant.
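The dispatch pattern behind such plugins can be sketched simply: the neural model emits a structured tool call, and a symbolic plugin executes it exactly. The JSON call format and the calculator tool are illustrative assumptions; real platforms each dictate their own plugin interface.

```python
import json

def calculator(args):
    """Symbolic plugin: exact arithmetic the neural model may get wrong."""
    if args["op"] == "add":
        return args["a"] + args["b"]
    return args["a"] * args["b"]

# Registry mapping tool names to plugin implementations.
TOOLS = {"calculator": calculator}

def dispatch(model_output):
    """Route a structured tool call emitted by the model to a plugin."""
    call = json.loads(model_output)
    return TOOLS[call["tool"]](call["args"])

# Instead of guessing the sum, the model emits a JSON tool call:
result = dispatch('{"tool": "calculator", "args": {"op": "add", "a": 2, "b": 40}}')
# result == 42
```

The registry-plus-dispatch shape is what makes the plugin contract enforceable: each tool declares a name and an argument schema, and everything else is the platform's inference loop.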

AI Quality Assurance.

The development of AI is moving faster than regulators and quality assurance personnel can keep up. There is currently no comprehensive regulatory framework in place to govern the quality and use of AI by the government. Similarly, there is no comprehensive framework for ensuring the quality of AI. Consequently, we have devised a series of best practices to ensure the quality of AI throughout its lifecycle, including safety, bias, privacy, reliability, and accuracy.

A&A for AI applications/systems, as well as traditional applications that incorporate AI.

The current government A&A process does not cover the subtleties of AI-based software, which is not traditional software. We have extended the traditional NIST-based A&A process to include the specific needs of AI, including:

  • AI supply chain security
  • AI-specific threats
  • AI technology security
  • AI-generated products security (fakes)
  • AI governance (models, alignment rules, etc.)
  • AI operational requirements (monitoring, retraining)