CoreML
Models for Apple devices. See https://github.com/FluidInference/FluidAudio for usage details
On‑device multilingual ASR model converted to Core ML for Apple platforms. This model powers FluidAudio’s batch ASR and is the same model used in our backend. It supports 25 European languages and is optimized for low‑latency, private, offline transcription.
For the quickest integration, use the FluidAudio Swift framework, which handles model loading, audio preprocessing, and decoding.
import AVFoundation
import FluidAudio

Task {
    // Download and load ASR models (first run only)
    let models = try await AsrModels.downloadAndLoad()

    // Initialize ASR manager with default config
    let asr = AsrManager(config: .default)
    try await asr.initialize(models: models)

    // Load audio and transcribe
    let samples = try await AudioProcessor.loadAudioFile(path: "path/to/audio.wav")
    let result = try await asr.transcribe(samples, source: .system)
    print(result.text)

    asr.cleanup()
}
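
If you would rather load one of the converted models directly with the standard Core ML APIs instead of going through FluidAudio, a minimal sketch along these lines should work. The file name "Encoder.mlmodelc" is only a placeholder for whichever compiled model you download from this collection, and FluidAudio still provides the audio preprocessing and decoding needed for end-to-end transcription.

import CoreML
import Foundation

do {
    // Let Core ML schedule work on CPU, GPU, or the Neural Engine.
    let config = MLModelConfiguration()
    config.computeUnits = .all

    // "Encoder.mlmodelc" is a placeholder name; substitute the compiled
    // model file you actually downloaded from this collection.
    let url = URL(fileURLWithPath: "path/to/Encoder.mlmodelc")
    let model = try MLModel(contentsOf: url, configuration: config)

    // Inspect the model interface before wiring up inputs and outputs.
    print(model.modelDescription.inputDescriptionsByName.keys)
    print(model.modelDescription.outputDescriptionsByName.keys)
} catch {
    print("Failed to load model: \(error)")
}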
For more examples (including CLI usage and benchmarking), see the FluidAudio repository: https://github.com/FluidInference/FluidAudio
Licensed under Apache 2.0. See the FluidAudio repository for details and usage guidance.