A brief overview of Artificial Intelligence (AI) and key terms to know about Responsible AI

Responsible AI

AI is built and used responsibly and ethically.
AI advances international competitiveness and national security.
AI serves society broadly, not narrowly.

Artificial Intelligence (AI)

AI is a broad set of techniques used to train computers to complete tasks that would otherwise require human intelligence, such as answering questions, generating data and recognizing objects.

Large Language Model (LLM)

Models trained on large amounts of text data that can perform a wide variety of language tasks, including text summarization, generation, and categorization. These models can perform generative tasks like text generation, so there is some overlap between LLMs and generative AI. Example: GPT models.
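As a rough illustration of such a language task, the sketch below summarizes a short passage with a pretrained model. It assumes the Hugging Face transformers library (with PyTorch) is installed; the specific model name is an illustrative choice only and is not part of the source article.

# Minimal sketch: using a pretrained language model for text summarization.
# Assumes the Hugging Face "transformers" library and PyTorch are installed;
# the model name below is an illustrative choice only.
from transformers import pipeline

summarizer = pipeline("summarization", model="sshleifer/distilbart-cnn-12-6")

article = (
    "Artificial intelligence is a broad set of techniques used to train "
    "computers to complete tasks that would otherwise require human "
    "intelligence, such as answering questions, generating data and "
    "recognizing objects."
)

result = summarizer(article, max_length=30, min_length=10, do_sample=False)
print(result[0]["summary_text"])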

A brief history of AI

Artificial Intelligence (1950s): The field of computer science that seeks to create intelligent machines that can replicate or exceed human intelligence.
Machine Learning (1959): A subset of AI that enables machines to learn from existing data and improve upon that data to make decisions or predictions.
Deep Learning (2017): A machine learning technique in which layers of neural networks are used to process data and make decisions.
Generative AI (2021): Models that can create new written, visual, and auditory content given prompts or existing data.

Key terms to know for Responsible AI

AI: A broad set of techniques used to train computers to complete tasks that would otherwise require human intelligence, such as answering questions, generating data and recognizing objects.
Algorithm: A generally applicable framework that can be used to develop an AI model. There are a variety of AI algorithms, including decision trees, neural networks and transformers. An algorithm is not deployed directly but is trained on data to develop an AI model; for example, a transformer could be trained on large volumes of written text to develop an AI model that can generate new text. "Algorithm" is sometimes used as a catch-all term for an entire AI system, which is often unhelpful. Public policy conversations should more precisely address the different layers of the value chain (model versus application), including when addressing AI risks, which emerge primarily at the application layer. The sketch after this list illustrates the algorithm, model and application layers.
Model: Emerging from the training of an algorithm on data, a model performs one or more generally applicable tasks, for example content generation, pattern detection or recommendation. Models are typically not deployed directly but are incorporated by an AI developer into an AI application, turning the generally applicable functionality toward a specific real-world use case.
AI application/system: A finished application that incorporates AI models alongside other software and inputs and is deployed in a real-world use case as part of a broader process or decision-making system, for example as part of a decision on whether to award someone credit.
Generative AI: Models that can create new data, including visual content, text, audio, code, etc.
Large language models: Models trained on large amounts of text data that can perform a wide variety of language tasks, including text summarization, generation, and categorization. These models can perform generative tasks like text generation, so there is some overlap between LLMs and generative AI.
Image generation models: A type of generative AI that can create images.
Multimodal models: Models that can accept inputs and generate outputs across multiple modalities, or types of data, such as text, images, and video.
Foundation models: Models trained on a broad set of unlabeled data that can be used for different tasks with minimal fine-tuning.
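
To make the algorithm, model and application distinction above concrete, here is a minimal sketch assuming the scikit-learn library is available; the training data, feature names and business rule are invented purely for illustration, not taken from the source. A decision-tree algorithm is trained on data to produce a model, and the model is then embedded in a simple credit-decision application.

# Minimal sketch of the algorithm -> model -> application distinction.
# Assumes scikit-learn is installed; the training data, features and the
# approval rule are invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier

# Algorithm: a generally applicable framework (here, a decision tree).
algorithm = DecisionTreeClassifier(max_depth=3)

# Training data: [income in thousands, existing debt in thousands] -> repaid loan?
X = [[30, 5], [80, 10], [45, 30], [120, 15], [25, 20], [95, 40]]
y = [1, 1, 0, 1, 0, 0]

# Model: produced by training the algorithm on data.
model = algorithm.fit(X, y)

# Application/system: the model embedded in a broader decision process,
# combined with other software logic (here, a hard business rule).
def credit_decision(income_k, debt_k):
    if debt_k > income_k:  # business rule applied outside the model
        return "declined"
    predicted_repayment = model.predict([[income_k, debt_k]])[0]
    return "approved" if predicted_repayment == 1 else "manual review"

print(credit_decision(60, 12))

The point of the sketch is that questions of responsible use, such as fairness in credit decisions, arise at the application layer, where the model is combined with other logic and applied to a real-world process.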

Ref: Responsible AI at Microsoft, www.microsoft.com