Ensure compatibility with multiple platforms, including .NET 6.0, .NET Framework 4.6.2, and .NET Standard 2.0 and above. Minimize dependencies to prevent version conflicts and the need for binding redirects.

Transcribing Audio Files

One of the core capabilities of the SDK is audio transcription. Developers can transcribe audio files asynchronously or in real time. Below is an example of how to transcribe an audio file:

```csharp
using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);
```

For local files, similar code can be used to perform transcription:

```csharp
await using var stream = new FileStream("./nbc.mp3", FileMode.Open);

var transcript = await client.Transcripts.TranscribeAsync(
    stream,
    new TranscriptOptionalParams
    {
        LanguageCode = TranscriptLanguageCode.EnUs
    }
);

transcript.EnsureStatusCompleted();
Console.WriteLine(transcript.Text);
```

Real-Time Audio Transcription

The SDK also supports real-time audio transcription using Streaming Speech-to-Text. This feature is especially useful for applications requiring immediate processing of audio data.

```csharp
using AssemblyAI.Realtime;

await using var transcriber = new RealtimeTranscriber(new RealtimeTranscriberOptions
{
    ApiKey = "YOUR_API_KEY",
    SampleRate = 16_000
});

transcriber.PartialTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Partial: {transcript.Text}"));
transcriber.FinalTranscriptReceived.Subscribe(transcript =>
    Console.WriteLine($"Final: {transcript.Text}"));

await transcriber.ConnectAsync();

// Pseudocode for getting audio from a microphone, for example
GetAudio(async (chunk) => await transcriber.SendAudioAsync(chunk));

await transcriber.CloseAsync();
```

Using LeMUR for LLM Apps

The SDK integrates with LeMUR to enable developers to build large language model (LLM) applications on voice data. Here is an example:

```csharp
var lemurTaskParams = new LemurTaskParams
{
    Prompt = "Provide a short summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
};

var response = await client.Lemur.TaskAsync(lemurTaskParams);

Console.WriteLine(response.Response);
```

Audio Intelligence Models

Additionally, the SDK has built-in support for audio intelligence models, enabling sentiment analysis and other advanced features.

```csharp
var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3",
    SentimentAnalysis = true
});

foreach (var result in transcript.SentimentAnalysisResults!)
{
    Console.WriteLine(result.Text);
    Console.WriteLine(result.Sentiment); // POSITIVE, NEUTRAL, or NEGATIVE
    Console.WriteLine(result.Confidence);
    Console.WriteLine($"Timestamp: {result.Start} - {result.End}");
}
```

For more information, visit the official AssemblyAI blog.
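Putting the pieces together, the transcription and LeMUR snippets above can be chained into a single end-to-end flow, since LeMUR takes the ID of a completed transcript. The following is a minimal sketch built only from the calls shown in this article (error handling omitted, and `YOUR_API_KEY` is a placeholder for a real API key):

```csharp
using AssemblyAI;
using AssemblyAI.Transcripts;

var client = new AssemblyAIClient("YOUR_API_KEY");

// Step 1: transcribe a remote audio file and wait for completion.
var transcript = await client.Transcripts.TranscribeAsync(new TranscriptParams
{
    AudioUrl = "https://storage.googleapis.com/aai-docs-samples/nbc.mp3"
});
transcript.EnsureStatusCompleted();

// Step 2: pass the finished transcript's ID to LeMUR for a summary.
var response = await client.Lemur.TaskAsync(new LemurTaskParams
{
    Prompt = "Provide a short summary of the transcript.",
    TranscriptIds = [transcript.Id],
    FinalModel = LemurModel.AnthropicClaude3_5_Sonnet
});

Console.WriteLine(response.Response);
```

Because `TranscribeAsync` only returns after the transcript reaches a terminal status, no extra polling is needed before handing the ID to LeMUR.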