Use Windows AI in Your WPF Application
With the rise of Copilot+ PCs that have a neural processing unit (NPU), you as a developer gain new possibilities for integrating artificial intelligence (AI) into your Windows desktop applications. Instead of consuming a cloud-based AI model, you can use local AI models.
In this blog post, you will learn how to build a WPF application that uses a local small language model (SLM) called Phi Silica that comes pre-installed with Windows 11. Everything that you do with Phi Silica stays on your machine. Let’s start by looking at Copilot+ PCs.
What Is a Copilot+ PC?
A Copilot+ PC is a modern kind of Windows 11 PC that has not only a central processing unit (CPU) and a graphics processing unit (GPU), but also a neural processing unit (NPU). The latter can be used to run AI tasks completely locally on your machine, in a very optimized way, both fast and with low power consumption.
Copilot+ PCs have an NPU that can run 40 trillion operations per second (TOPS) or more. A trillion is a number with 12 zeros, so 40 trillion is equal to 40,000,000,000,000. That’s a massive number of operations per second.
What Exactly is the NPU?
The NPU is usually not a completely separate processing unit, but integrated into the CPU. Modern processor generations like the Intel Core Ultra 200v series or the Qualcomm Snapdragon X Series have an NPU that supports 40 TOPS or more. The latter one is an Arm-based CPU. For this blog post, I’ll be using a Surface Laptop 7 with a Qualcomm Snapdragon X Elite processor.
In the Task Manager, under the Performance tab, you can see the NPU alongside the CPU and the GPU:

So, on Copilot+ PCs you can make use of local AI that runs on the power-efficient NPU. Power efficient means that the NPU does not drain your battery. Copilot+ PCs are known for their fantastic battery life that lasts a full working day. That’s not a marketing slogan, it’s really true and it’s amazing. That’s one of the reasons why I love the Surface Laptop 7.
The NPU allows you to access the most advanced AI features and models on your device. To learn more about Copilot+ PCs and the NPU, check out the official docs: Copilot+ PCs developer guide.
What’s in It for You as a Developer?
On a Copilot+ PC, you as a developer can use the Windows Copilot Runtime (WCR) APIs that are part of the Windows App SDK. The Windows Copilot Runtime APIs are often also just called Windows AI APIs. There are different APIs available as part of the WCR APIs:
- Phi Silica – This is a local small language model. You will learn how to use it in a WPF app in this blog post
- AI Text recognition – Recognize text in images. Learn more about AI Text recognition.
- AI Imaging – Sharpen images, detect objects, describe content. Learn more about AI Imaging.
- And more…
Now let’s start using that NPU and Phi Silica in a WPF project. But first, let’s set up your machine.
Set up Your Machine
At the time of writing this blog post (May 2025), you need the Insider Preview version of Windows 11 on your Copilot+ PC. That version contains the required Windows Copilot Runtime (WCR) APIs. You can sign up for the Windows Insider Program via “Settings 🡆 Windows Update”. There, under More options, you’ll find the Windows Insider Program:

When signing up for the Windows Insider Program, you can select between different channels. Here they’re sorted from very experimental to stable:
- Canary channel – Get previews of the latest platform changes. These builds can be unstable and ship without documentation. Only recommended for highly technical users.
- Dev channel – Get Windows 11 preview builds with new ideas and long-lead features. Here too, build stability can be low. As the name suggests, this channel is recommended for developers and technical enthusiasts.
- Beta channel – A stable environment with pre-release features of Windows 11, to which you can provide feedback.
- Release Preview Channel – Gives you access to the next version of Windows 11 before it is generally available. Microsoft recommends this channel also for commercial users.
You can read more about the Windows Insider Program at https://www.microsoft.com/windowsinsider.
For my machine, I selected the Dev channel. At the time of writing, 26200.5570 is the Windows 11 build version that I used:

Now, with the machine set up, let’s use Windows AI in a WPF application.
Set up a WPF project
The idea is to include a prompt in a WPF application that uses the local Phi Silica Small Language Model to generate an answer.
I start here with a new .NET 9.0 WPF project, but the steps to include Windows AI in your existing WPF application are the same. The same applies to Windows Forms.
In Visual Studio 2022, I create a new .NET 9.0 project with the template WPF Application. I call the project WpfWindowsAIApp.
To use Windows AI in the project, you must adjust a few things in your .csproj file:
- Adjust the TargetFramework element – The Windows App SDK that contains the Windows Copilot Runtime (WCR) APIs requires a specific target OS version. This means that you must change the value of the TargetFramework element by appending a specific Windows version. I changed the value from net9.0-windows to net9.0-windows10.0.22621.0. You can read more about this in the official docs.
- Add the RuntimeIdentifiers element – It’s recommended to add this element with the supported architectures win-x86;win-x64;win-arm64. This can be useful when you create a self-contained deployment of your app where you want to include platform-specific assets.
- Add the WindowsPackageType element – Include this element and set its content to None. This means that your app will be treated as an unpackaged app, which is necessary to use the Windows App SDK and its Windows Copilot Runtime (WCR) APIs in a WPF application that is not packaged in a so-called MSIX package.
Below you see the PropertyGroup element from my .csproj file with the adjusted TargetFramework element and the added elements RuntimeIdentifiers and WindowsPackageType.
<PropertyGroup>
  <OutputType>WinExe</OutputType>
  <TargetFramework>net9.0-windows10.0.22621.0</TargetFramework>
  <RuntimeIdentifiers>win-x86;win-x64;win-arm64</RuntimeIdentifiers>
  <Nullable>enable</Nullable>
  <ImplicitUsings>enable</ImplicitUsings>
  <UseWPF>true</UseWPF>
  <WindowsPackageType>None</WindowsPackageType>
</PropertyGroup>
Setting the WindowsPackageType to None will add the two linked files MddBootstrapAutoInitializer.cs and WindowsAppSDK-Versioninfo.cs to your project. These files are necessary to initialize the Windows App SDK for unpackaged apps. You can see the files in the Solution Explorer below. So, in case you’re wondering why these files are there: they’re needed to initialize the Windows App SDK for your unpackaged WPF app.

The next step is to reference the Microsoft.WindowsAppSDK NuGet package. To use the WCR APIs, you need to use the latest experimental version, which is 1.8.250410001-experimental1 at the time of writing this blog post.
Instead of manually referencing that version, you can just copy/paste the full .csproj file content below that contains the corresponding package reference for the Windows App SDK.
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>WinExe</OutputType>
    <TargetFramework>net9.0-windows10.0.22621.0</TargetFramework>
    <RuntimeIdentifiers>win-x86;win-x64;win-arm64</RuntimeIdentifiers>
    <Nullable>enable</Nullable>
    <ImplicitUsings>enable</ImplicitUsings>
    <UseWPF>true</UseWPF>
    <WindowsPackageType>None</WindowsPackageType>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.WindowsAppSDK"
                      Version="1.8.250410001-experimental1" />
  </ItemGroup>
</Project>
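Alternatively, if you prefer the command line over editing the .csproj by hand, the same package reference can be added with the dotnet CLI. This is just an equivalent way to produce the PackageReference shown above; note that the exact experimental version number may have changed by the time you read this:

```shell
# Run inside the project folder to add the Windows App SDK package reference
dotnet add package Microsoft.WindowsAppSDK --version 1.8.250410001-experimental1
```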
Now everything is set up, and the local Phi Silica model can be used in your WPF app. First, let’s create the user interface. The idea is to create a simple Window like the one below, with a TextBox to enter a prompt, a Generate button to start the generation, and a bigger read-only TextBox for the response.

To create this user interface, replace the Grid in the MainWindow.xaml file with the one below. As you can see, the Grid defines three rows. In the first row is a StackPanel with the TextBox for the prompt. In the second row (the one with index 1, so Grid.Row="1") is another StackPanel with the Button to generate the response and a TextBlock to show a status. Then, in the third row of the Grid is a read-only TextBox to show the response. Note the named elements btnGenerate, txtStatus, and txtResponse. These names are used to access these elements from the code-behind file MainWindow.xaml.cs. Instead of code-behind, you could of course also use the MVVM pattern, but let’s keep things simple here, as that pattern doesn’t change the way you use Windows AI in your WPF app. Besides the names used to access the elements in the code-behind file, note also that the Generate Button has an event handler named ButtonGenerate_Click for its Click event.
<Grid>
    <Grid.RowDefinitions>
        <RowDefinition Height="Auto"/>
        <RowDefinition Height="Auto"/>
        <RowDefinition/>
    </Grid.RowDefinitions>
    <StackPanel Margin="10 0 10 10">
        <TextBlock Text="Prompt:" Margin="5"/>
        <TextBox x:Name="txtPrompt"/>
    </StackPanel>
    <StackPanel Grid.Row="1" Orientation="Horizontal" Margin="10 0 10 10">
        <Button x:Name="btnGenerate" Content="Generate"
                Click="ButtonGenerate_Click" Padding="10 3"/>
        <TextBlock x:Name="txtStatus" Margin="10 0"
                   VerticalAlignment="Center" Foreground="LightGreen"/>
    </StackPanel>
    <TextBox Grid.Row="2" x:Name="txtResponse"
             Margin="10 0 10 10" Padding="10"
             IsReadOnly="True" TextWrapping="Wrap"
             ScrollViewer.VerticalScrollBarVisibility="Auto" />
</Grid>
As this is a .NET 9.0 application, I also give it a dark look by setting the ThemeMode property of the Application object in the App.xaml file to Dark (learn more about Windows 11 theming in WPF in this blog post):
<Application x:Class="WpfWindowsAIApp.App"
             xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
             xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
             xmlns:local="clr-namespace:WpfWindowsAIApp"
             StartupUri="MainWindow.xaml"
             ThemeMode="Dark">
    <Application.Resources>
    </Application.Resources>
</Application>
Now let’s write the code to generate a response when the Generate Button is clicked. In the code-behind file MainWindow.xaml.cs you can add the code below, so that you can compile your application. There’s a TODO comment in the ButtonGenerate_Click event handler. Note the two using directives at the top of the file for the namespaces Microsoft.Windows.AI and Microsoft.Windows.AI.Generative. You need them for the code that you will write. The namespace Windows.Foundation is also included; like the other two namespaces, it comes from the referenced Windows App SDK NuGet package, and you will need it later to show the progress of the generation. Only System.Windows is a native .NET namespace, which contains, for example, WPF’s Window class.
using System.Windows;
using Microsoft.Windows.AI;
using Microsoft.Windows.AI.Generative;
using Windows.Foundation;

namespace WpfWindowsAIApp;

public partial class MainWindow : Window
{
    public MainWindow()
    {
        InitializeComponent();
    }

    private async void ButtonGenerate_Click(object sender, RoutedEventArgs e)
    {
        // TODO: Generate a response based on the prompt
    }
}
When you run the application for the very first time, you might get the popup below. It says that you have to install a compatible Windows App Runtime for the Windows App SDK 1.8-experimental1. This is necessary for unpackaged apps that are using the Windows App SDK. So click on Yes.

When you click on Yes in the dialog above, you land on this page: https://learn.microsoft.com/en-us/windows/apps/windows-app-sdk/downloads. There you can download the corresponding Windows App Runtime for your processor architecture.
After this is done, you should be able to run the WPF project. In the next step, let’s implement the ButtonGenerate_Click event handler.
Implement the Windows AI logic
You can see the implemented ButtonGenerate_Click event handler in the code snippet below. At the beginning of a try-finally block, the btnGenerate’s IsEnabled property is set to false. This deactivates the button while a generation is running. The button is enabled again at the end, in the finally block.
Also at the beginning, the text of the txtResponse TextBox is set to an empty string. Next, the txtStatus.Text is set to a message that tells the user that the model is being loaded into memory.
To load the model into memory, the LanguageModel class (from the Microsoft.Windows.AI.Generative namespace) is used. First, its static GetReadyState method is called. This method returns a value of the AIFeatureReadyState enumeration. If that value is EnsureNeeded, the model needs to be loaded into memory; otherwise it’s already there. To load the model into memory, the static EnsureReadyAsync method is called. If the returned result does not have the status Success (a value of the AIFeatureReadyResultState enum), an exception is thrown. This means that after this first if statement, you’re ready to use the language model. If you open Task Manager after this step and look at the NPU in the Performance tab, you will see that the NPU has loaded something (= the Phi Silica model) into its shared memory.
To use the Phi Silica language model, the static CreateAsync method is called in the code snippet below. It returns a LanguageModel instance. This is a disposable object, so the using keyword is used in front of the declaration of the languageModel variable. This has the effect that the LanguageModel instance gets automatically disposed at the end of the ButtonGenerate_Click event handler. After the LanguageModel instance has been created, the txtStatus.Text property is set to Generating answer…. The text of the txtPrompt TextBox is stored in a prompt variable, and now a response can be generated.
To generate the response, the GenerateResponseAsync method is called on the LanguageModel instance, with the prompt variable passed as an argument. The returned response (type: LanguageModelResponseResult) has a Text property that contains the generated response. Its value is assigned to the Text property of the txtResponse TextBox, which at this point shows the response to the user. That’s it. In the finally block, the btnGenerate Button is enabled again and txtStatus.Text is set to an empty string.
private async void ButtonGenerate_Click(object sender, RoutedEventArgs e)
{
    try
    {
        btnGenerate.IsEnabled = false;
        txtResponse.Text = "";
        txtStatus.Text = "Loading model into memory...";

        if (LanguageModel.GetReadyState() == AIFeatureReadyState.EnsureNeeded)
        {
            var result = await LanguageModel.EnsureReadyAsync();
            if (result.Status != AIFeatureReadyResultState.Success)
            {
                throw new Exception(result.ExtendedError.Message);
            }
        }

        using LanguageModel languageModel = await LanguageModel.CreateAsync();

        txtStatus.Text = "Generating answer...";
        string prompt = txtPrompt.Text;
        var response = await languageModel.GenerateResponseAsync(prompt);
        txtResponse.Text = response.Text;
    }
    finally
    {
        btnGenerate.IsEnabled = true;
        txtStatus.Text = "";
    }
}
So, as you can see, the code above is pretty straightforward. It’s not super-complex and actually quite easy to write.
Now let’s try it. To do this, I enter the prompt “How can I learn skateboarding?” in the WPF Window below. It takes a bit of time, and then the response comes back. As you can see, Phi Silica even knows about skateboarding.

Show Progress While Generating a Response
To generate a response for a given prompt, the application currently uses the following two statements:
var response = await languageModel.GenerateResponseAsync(prompt);
txtResponse.Text = response.Text;
These statements have the disadvantage that the user does not see any progress. The application doesn’t show anything until the full response is generated, and only then is that full response shown to the user. But as everything runs locally on the NPU, it can take quite a few seconds until the full response is generated, so not showing any progress is not ideal. Wouldn’t it be great if users could see the progress? If they could watch Phi Silica generate the response step by step?
That’s of course possible too. To show the progress, you replace the two statements above with the code block below. First, an AsyncOperationProgressHandler variable is created (from the Windows.Foundation namespace). As generic type arguments, the LanguageModelResponseResult class and the type string are used. In the handler, the WPF Dispatcher is used to marshal the work back to the UI thread. On the UI thread, the string parameter of the handler with the name str is appended to the Text property of the txtResponse TextBox.
Then, after defining that handler, the GenerateResponseAsync method is called on the LanguageModel instance as before, but this time the method is not awaited. Instead, the result is stored in an asyncOp variable. That variable is of type IAsyncOperationWithProgress<LanguageModelResponseResult, string>, and it has a Progress property to which the created handler is assigned. After this, the asyncOp is awaited, and now it will call the progress handler multiple times during the asynchronous operation.
AsyncOperationProgressHandler<LanguageModelResponseResult, string> handler = async (response, str) =>
{
    await Dispatcher.InvokeAsync(() =>
    {
        txtResponse.Text += str;
    });
};

var asyncOp = languageModel.GenerateResponseAsync(prompt);
asyncOp.Progress = handler;
await asyncOp;
To see the big picture, the following code snippet contains the full ButtonGenerate_Click event handler with the adjustments from the code snippet above.
private async void ButtonGenerate_Click(object sender, RoutedEventArgs e)
{
    try
    {
        btnGenerate.IsEnabled = false;
        txtResponse.Text = "";
        txtStatus.Text = "Loading model into memory...";

        if (LanguageModel.GetReadyState() == AIFeatureReadyState.EnsureNeeded)
        {
            var result = await LanguageModel.EnsureReadyAsync();
            if (result.Status != AIFeatureReadyResultState.Success)
            {
                throw new Exception(result.ExtendedError.Message);
            }
        }

        using LanguageModel languageModel = await LanguageModel.CreateAsync();

        txtStatus.Text = "Generating answer...";
        string prompt = txtPrompt.Text;

        AsyncOperationProgressHandler<LanguageModelResponseResult, string> handler = async (response, str) =>
        {
            await Dispatcher.InvokeAsync(() =>
            {
                txtResponse.Text += str;
            });
        };

        var asyncOp = languageModel.GenerateResponseAsync(prompt);
        asyncOp.Progress = handler;
        await asyncOp;
    }
    finally
    {
        btnGenerate.IsEnabled = true;
        txtStatus.Text = "";
    }
}
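Because GenerateResponseAsync returns an IAsyncOperationWithProgress, you could also let the user cancel a long-running generation. The sketch below is not part of the original sample: it assumes a hypothetical additional Stop button named btnStop in the XAML and stores the running operation in a field. Cancel() is available because IAsyncOperationWithProgress inherits from IAsyncInfo, and awaiting a cancelled WinRT operation throws a TaskCanceledException:

```csharp
// Hypothetical extension (not in the original sample): a field that holds the
// running operation, plus a Click handler for an assumed Stop button (btnStop).
private IAsyncOperationWithProgress<LanguageModelResponseResult, string>? _currentOperation;

private void ButtonStop_Click(object sender, RoutedEventArgs e)
{
    // Cancel() comes from IAsyncInfo, which IAsyncOperationWithProgress inherits.
    _currentOperation?.Cancel();
}

// In ButtonGenerate_Click, you would then store the operation in the field
// and catch the cancellation instead of awaiting asyncOp directly:
//
//     _currentOperation = languageModel.GenerateResponseAsync(prompt);
//     _currentOperation.Progress = handler;
//     try
//     {
//         await _currentOperation;
//     }
//     catch (TaskCanceledException)
//     {
//         txtStatus.Text = "Generation cancelled";
//     }
```

This keeps the progress handler unchanged; only the awaiting code needs the try/catch around the stored operation.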
With the new code, you can see live how the answer is generated step by step. To show this in this blog post, I created the little video below with another prompt. The response is generated with the local language model, and while doing this, the Surface Laptop 7 even runs on battery. So, this video also gives you a little impression of the speed that you can expect. It’s not extremely fast, but also not extremely slow. I would say it’s very usable and useful at this speed. Remember that everything runs locally; no cloud backend with massive processing power is used. And remember that efficiency is also a key feature of local AI, which means that generating responses does not put a heavy load on your battery.
Summary
You learned in this blog post how to use Windows AI in a WPF application. Everything runs in a power-efficient way locally on your machine, which means there’s no need to set up a model in the cloud. As you’ve seen, the code to use Windows AI in your Windows desktop application is very straightforward. This makes integrating AI into your application not a super-complex task. I think Windows AI is super powerful for extending existing WPF, WinForms, and WinUI apps with AI features.
Note that the Windows Copilot Runtime (WCR) APIs are part of the experimental Windows App SDK, which means they can change in the future.
Get the Code
I’ve uploaded the code of the WPF project used in this blog post to a repository on GitHub. Remember, to run this application, you must have a Copilot+ PC and you must set it up as described in this blog post. But maybe you just want to explore the code, and for that you don’t need a Copilot+ PC; you can do it from any computer.
Find more Examples
First of all, there’s a fantastic documentation about Windows AI here: https://learn.microsoft.com/windows/ai.
To try and play with Windows AI, I highly recommend the AI Dev Gallery app: https://learn.microsoft.com/windows/ai/ai-dev-gallery.
I hope you enjoyed reading this blog post. If there are any questions or ideas, please use the comments section below.
Happy coding,
Thomas