Amplify AI Platform

UNC System's New AI Platform

The University of North Carolina System

Amplify: Secure AI Chat Platform — Beta Release

Chat safely with multiple LLMs in one interface — with enterprise guardrails.

Amplify Interface Overview
 

We’re excited to launch the Beta version of Amplify to the UNC System organization. Amplify provides a
secure, organization‑controlled way to use leading AI models—like OpenAI GPT‑4 and Anthropic Claude—
through a single, user‑friendly interface.

With built‑in privacy, guardrails, and secure data storage, Amplify enables responsible AI at scale.


Access Amplify

View FAQs

Use your System Office credentials to log in.

Security & Governance

  • Data sovereignty: Chat data and uploads remain under organizational control; not used to train external models.
  • Secure prompting: Prompts and outputs are not captured by LLM providers.
  • Guardrails: Filters for hallucinations, sensitive information, and inappropriate language.
  • Administrative control: Centralized configuration for access, costs, and feature flags.

Interface Highlights

Conversations

Chat with multiple models, tune precision vs. creativity, and keep your history organized.

Conversations View

Assistants

Build custom assistants powered by your documents and data—perfect for departmental workflows.

Assistant Builder

Prompt Templates & Helpers

Reusable prompts and built‑in helpers generate summaries, CSVs, slides, and more—fast.

Prompt Templates

Key Features

Multi‑Model Access

Use GPT‑4, Claude, and more from one interface.

Custom Assistants

Attach files and data locally to ground responses.

Safety Guardrails

Controls for hallucinations, sensitive info, and language use.

Embeddable URLs and APIs

Create embeddable URLs or APIs to integrate Amplify with organizational websites.

Questions or feedback? Send to help@northcarolina.edu

 

Amplify is the new AI platform for the System Office. It is an open-source platform originally developed by Vanderbilt University and released to the public community. Built in AWS, it provides a secure, responsible way to interact with a variety of LLMs in one application. The application includes a library of models hosted by AWS Bedrock and can also bring in OpenAI and Google models. Data uploaded and shared in the environment stays in the UNCSO tenant and is not shared with any of the large language model providers.

Amplify FAQs

Data Security and Use within Amplify

When you use public generative AI tools, the data you enter is often shared with the company that created the tool and may be used for various purposes, including model training. Amplify, on the other hand, is an internal UNC System Office tool. Our agreements with the AI model providers prohibit them from using our data for model training. This ensures that any information you enter into Amplify remains secure and is not used by any external entities.

The UNC System Office classifies data into three levels of sensitivity. At this time, you may use information classified as Level 1 (Public), Level 2 (Internal), and Level 3B (Regulated) within Amplify. However, protected personally identifiable information (PII) such as SSNs, driver's license numbers, passport numbers, financial account information, credit card information protected by PCI DSS, and data protected under regulations such as HIPAA, GLBA, and CJIS is strictly prohibited.

What data types are supported?

Currently, Amplify works with most text-based files. Image and video files are not supported; the following file types are:

*If a file fails to upload or be recognized, try saving it in a different format (e.g., converting a complex PDF to a plain text file or a different PDF version).

  • Comma-Separated Values (.csv) 
  • Compressed file (.zip) 
  • Excel spreadsheets (.xlsx)  
  • Hypertext Markup Language – HTML (.html) 
  • JavaScript (.js) 
  • JSON format (.json)  
  • Markdown (.md) 
  • Plain text (.txt)  
  • Portable Document Format (.pdf) 
  • PowerPoint Presentation (.pptx) 
  • Python format (.py) 
  • Word documents (.docx) 
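If you script uploads, a simple extension check against the list above can catch unsupported files before they fail. The helper below is illustrative, not part of Amplify:

```python
from pathlib import Path

# Extensions taken from the supported-file-types list above.
SUPPORTED = {".csv", ".zip", ".xlsx", ".html", ".js", ".json",
             ".md", ".txt", ".pdf", ".pptx", ".py", ".docx"}

def is_supported(filename):
    """Return True if the file's extension appears on Amplify's list."""
    return Path(filename).suffix.lower() in SUPPORTED

print(is_supported("budget.xlsx"))  # → True
print(is_supported("demo.mp4"))     # → False
```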

Amplify has a Send Feedback menu option that opens your default email client addressed to the support mailbox. A ticket will be created for review. UNCSO IT will build roadmaps of features and bug fixes to work on, either internally or collaboratively with the Amplify open-source community, continually building out the feature set based on requests.

Please refer to the Amplify User Guide for feature details and usage instructions.

Amplify User Guide

Core Features besides Chat

  • Assistants – Similar to GPTs, these act as scoped helpers or bots that provide information tailored to your business need. Upload specific documents, point to specific web data sources, and provide custom instructions so the assistant answers questions more quickly, based only on the knowledge you provide.
    • Add a path to create a subdomain that you can embed in your websites for direct access to your assistant.
  • Prompt Templates – These are pre-written starting points for your conversations. Using a prompt template can save you time and ensure you provide the AI with the necessary context for effective responses, especially for common tasks.
  • Custom Instructions – Similar to “Gems” in Gemini, Custom Instructions let you provide overarching guidelines or persona details that the AI should always adhere to in your chats, making its responses more consistent with your preferences.
  • API access – Create personal or resource-account API keys to access Amplify assets. If chatbot interfaces already exist, these APIs can be used to send prompts to and receive responses from Amplify.
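The API-key pattern described above can be sketched as follows. The endpoint URL, path, and payload fields here are hypothetical placeholders, not Amplify's actual contract; consult the Amplify User Guide for the real API. Only the "API key in an auth header" pattern comes from the FAQ:

```python
import json

# Hypothetical base URL -- replace with the address from the User Guide.
API_BASE = "https://amplify.example.edu/api"

def build_chat_request(api_key, prompt, model="gpt-4o"):
    """Assemble a chat request: URL, auth headers, and a JSON body.

    The field names ("model", "prompt") are illustrative assumptions.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({"model": model, "prompt": prompt})
    return f"{API_BASE}/chat", headers, body

url, headers, body = build_chat_request("amp-key-123", "Summarize this policy.")
print(url)  # → https://amplify.example.edu/api/chat
```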

What are temperature settings in Amplify?

When using generative AI tools, temperature refers to how precise or how creative you want the output of the tool to be. Adjusting the temperature can help you tailor the output to meet your specific needs. You can select a temperature value between 0 and 1, and it affects how predictable or diverse the output will be.  

  • Low Temperature (Closer to 0): Setting the temperature closer to 0 will produce more deterministic and repetitive responses. At a low temperature, the model is more likely to select the most probable words or phrases given the context, leading to text that is often more focused and precise but less varied and creative. 
  • High Temperature (Closer to 1): As the temperature approaches 1, the model’s responses become more stochastic or random. This means that less probable words or phrases have a higher chance of being selected, resulting in responses that are more diverse, creative, and less predictable. 

When should I use the different temperatures? 

Setting the temperature of a chat is primarily based on what your goal is with that conversation. If you’re looking for precise, accurate information or responses, a lower temperature is generally more suitable. This might include technical explanations, specific instructions, or factual content. When your goal is to generate creative writing, brainstorming ideas, or you’re looking for a variety of responses to explore different perspectives, a higher temperature value can help achieve that by introducing more novelty into the text. The temperature setting is a useful tool for experimentation and can serve as a way to adjust the model to more effectively adapt to your specific, unique use cases.
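The effect of temperature can be illustrated with a small, self-contained sampling sketch. The tokens and scores below are made up, and real models apply this over their full vocabulary, but the mechanism is the same: dividing scores by a small temperature sharpens the distribution, while a large temperature flattens it.

```python
import math
import random

def sample_with_temperature(token_scores, temperature, rng=None):
    """Sample one token from raw scores, scaled by temperature.

    Low temperature -> near-deterministic (top token almost always wins);
    high temperature -> more varied, less predictable choices.
    """
    rng = rng or random.Random()
    t = max(temperature, 1e-6)  # avoid division by zero at temperature 0
    scaled = [s / t for s in token_scores.values()]
    # Softmax over the scaled scores (shifted by the max for stability).
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(list(token_scores), weights=probs, k=1)[0]

scores = {"the": 3.0, "a": 2.0, "banana": 0.5}
# Near-zero temperature almost always picks the top-scoring token.
print(sample_with_temperature(scores, 0.01))  # → "the"
```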

Amplify also lets you choose a response length. Higher values will allow, but do not guarantee, longer answers. If you do not need highly verbose answers, we recommend leaving this at the default (Average) or turning it down to Concise to save on the number of output tokens used and reduce cost.

  • Concise
  • Average
  • Verbose
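As a rough illustration of why the length setting affects cost, the sketch below uses made-up token caps and a made-up per-token price; Amplify's actual limits and rates are not stated here.

```python
# Illustrative numbers only -- not Amplify's real caps or pricing.
LENGTH_CAPS = {"Concise": 256, "Average": 1024, "Verbose": 4096}

def max_output_cost(setting, price_per_1k_tokens=0.015):
    """Worst-case output cost if the model fills the cap for this setting."""
    return LENGTH_CAPS[setting] / 1000 * price_per_1k_tokens

for setting in LENGTH_CAPS:
    print(f"{setting}: up to ${max_output_cost(setting):.4f} per response")
```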

Amplify provides access to a variety of large language models. UNCSO is currently releasing the following models for use and will add or remove models as more advanced ones become available at acceptable token costs. All models provided by AWS Bedrock have additional guardrails applied.

  • No Conversational Memory Between Chats: Amplify AI models do not “remember” information from one chat conversation to another. If you tell it your name in one chat, and then open a new chat (even with the same model), you’ll need to re-introduce yourself or any prior context. Each new chat begins with a fresh memory slate.
  • No Web Search Capabilities for Bedrock Models: At this time, Amplify cannot search the web or review specific websites you give it; only files you directly upload and the model's own training data are used to answer questions. The only models with live web search are the GPT models added from OpenAI (GPT-4o & GPT4-mini).
  • Prompt chaining: Prompt chaining is simply breaking down a complex prompt into a series of smaller, more manageable prompts that work together in sequence. Instead of asking an AI to do everything at once, you guide it through a step-by-step process where the output of one prompt becomes the input for the next prompt.
  • Creation of images, analysis of video and audio: Image generation is not currently available, nor is the upload and analysis of video or audio files.
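The prompt-chaining pattern described above can be sketched as follows. Here `call_model` is a stand-in stub that just echoes its prompt so the pattern runs offline; in practice you would replace it with a real chat call:

```python
def call_model(prompt):
    """Stand-in for a real chat call to Amplify (echoes the prompt)."""
    return f"[model output for: {prompt}]"

def chain(prompt_templates, initial_input):
    """Run prompts in sequence, feeding each output into the next prompt."""
    result = initial_input
    for template in prompt_templates:
        result = call_model(template.format(previous=result))
    return result

# Instead of one big prompt, break the task into two smaller steps.
steps = [
    "Summarize the following report: {previous}",
    "List three action items based on this summary: {previous}",
]
print(chain(steps, "Q3 enrollment rose 4% across the system."))
```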