# Presentation Recorder

## Overview
Presentation Recorder is a native macOS application built with SwiftUI and Apple’s ScreenCaptureKit framework. It captures screen content, system audio, microphone input, and camera footage simultaneously — producing timestamped MP4 recordings ideal for presentations, tutorials, and demos. The app features a live preview, real-time audio level metering, and configurable video quality presets ranging from 15 fps to 120 fps.
## Features

- Simultaneous Screen + Camera Recording — Captures both a screen or window and the webcam feed at the same time, outputting separate timestamped MP4 files for flexible post-production
- Configurable Video Quality — Five presets (Low through Extreme) controlling frame rate (15–120 fps) and pixel scale (1x–3x), with HDR support via `SCStreamConfiguration.Preset`
- Real-Time Audio Metering — Uses Apple’s Accelerate framework (`vDSP_rmsqv`, `vDSP_maxv`) to compute RMS and peak power levels per channel, displayed as a live level indicator in the UI
- Microphone + System Audio Capture — Independent toggles for microphone and application audio, with the option to exclude the app’s own audio from the recording
- Content Sharing Picker — Integrates `SCContentSharingPicker` for selecting specific displays, windows, or applications to capture, including multi-window and multi-app modes
- Live Capture Preview — Renders incoming `IOSurface` frames via a `CALayer`-backed `NSView` with proper aspect ratio handling
- Transparent Window UI — The app window uses `NSVisualEffectView` with `.ultraThinMaterial` blending, giving it a modern translucent macOS look
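A quality preset along the lines described above might map onto `SCStreamConfiguration` roughly as follows. This is a sketch: the preset names, the exact rate/scale pairings, and the helper function are illustrative assumptions, not the app's actual API.

```swift
import ScreenCaptureKit
import CoreMedia

// Hypothetical presets mirroring the Low-through-Extreme range described
// above (15-120 fps, 1x-3x pixel scale); the app's real values may differ.
enum VideoQuality: CaseIterable {
    case low, medium, high, ultra, extreme

    var frameRate: Int {
        switch self {
        case .low: return 15
        case .medium: return 30
        case .high: return 60
        case .ultra: return 90
        case .extreme: return 120
        }
    }

    var pixelScale: Int {
        switch self {
        case .low, .medium: return 1
        case .high, .ultra: return 2
        case .extreme: return 3
        }
    }
}

func makeConfiguration(for quality: VideoQuality, display: SCDisplay) -> SCStreamConfiguration {
    let config = SCStreamConfiguration()
    // The minimum frame interval is the inverse of the target frame rate.
    config.minimumFrameInterval = CMTime(value: 1, timescale: CMTimeScale(quality.frameRate))
    config.width = display.width * quality.pixelScale
    config.height = display.height * quality.pixelScale
    config.capturesAudio = true                // capture system audio
    config.excludesCurrentProcessAudio = true  // skip the app's own audio
    return config
}
```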
## How It Works

The architecture follows a clear separation: `ScreenRecorder` orchestrates the capture session, `CaptureEngine` wraps `SCStream` and yields frames via Swift’s `AsyncThrowingStream`, and `CameraRecorder` manages the `AVCaptureSession` independently.
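A minimal sketch of that bridging pattern, turning `SCStream` delegate callbacks into an `AsyncThrowingStream`, could look like this. The `CaptureOutput` type and its method names are assumptions for illustration, not the app's actual code.

```swift
import ScreenCaptureKit
import CoreMedia

// Bridges SCStreamOutput callbacks into an AsyncThrowingStream of
// sample buffers, in the style the CaptureEngine description suggests.
final class CaptureOutput: NSObject, SCStreamOutput {
    private var continuation: AsyncThrowingStream<CMSampleBuffer, Error>.Continuation?

    func startCapture(stream: SCStream,
                      queue: DispatchQueue) -> AsyncThrowingStream<CMSampleBuffer, Error> {
        AsyncThrowingStream { continuation in
            self.continuation = continuation
            do {
                // Deliver screen frames on the dedicated queue.
                try stream.addStreamOutput(self, type: .screen, sampleHandlerQueue: queue)
            } catch {
                continuation.finish(throwing: error)
            }
        }
    }

    func stream(_ stream: SCStream,
                didOutputSampleBuffer sampleBuffer: CMSampleBuffer,
                of type: SCStreamOutputType) {
        guard type == .screen, sampleBuffer.isValid else { return }
        continuation?.yield(sampleBuffer)
    }
}
```

A caller could then consume frames with `for try await sampleBuffer in output.startCapture(stream: stream, queue: videoSampleBufferQueue) { … }`.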
The capture pipeline processes video and audio on dedicated dispatch queues:
```swift
private let videoSampleBufferQueue = DispatchQueue(label: "com.example.apple-samplecode.VideoSampleBufferQueue")
private let audioSampleBufferQueue = DispatchQueue(label: "com.example.apple-samplecode.AudioSampleBufferQueue")
private let micSampleBufferQueue = DispatchQueue(label: "com.example.apple-samplecode.MicSampleBufferQueue")
```

The `PowerMeter` class handles multiple PCM buffer formats (float, int16, and int32) with Accelerate-optimized DSP, converting raw samples to calibrated dB levels via a `MeterTable` lookup:
```swift
func process(buffer: AVAudioPCMBuffer) {
    // Handles float, int16, and int32 channel data
    // Uses vDSP_rmsqv for average power and vDSP_maxv for peak
}
```
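The float path of that skeleton could be filled in roughly like this. The `ChannelLevels` type, the -160 dB floor, and the function name are assumptions; the app's actual `MeterTable` calibration may differ.

```swift
import Accelerate
import AVFoundation

// Per-channel average (RMS) and peak power in dB, computed with the
// vDSP routines named above. A sketch, not the app's PowerMeter.
struct ChannelLevels {
    var averagePowerDB: Float
    var peakPowerDB: Float
}

func levels(for buffer: AVAudioPCMBuffer) -> [ChannelLevels] {
    guard let channelData = buffer.floatChannelData else { return [] }
    let frameCount = vDSP_Length(buffer.frameLength)
    guard frameCount > 0 else { return [] }

    return (0..<Int(buffer.format.channelCount)).map { channel in
        let samples = channelData[channel]
        var rms: Float = 0
        var peak: Float = 0
        vDSP_rmsqv(samples, 1, &rms, frameCount)  // root-mean-square of the frame
        vDSP_maxv(samples, 1, &peak, frameCount)  // largest sample value

        // Convert linear amplitude to dB, clamped to an assumed floor.
        let floorDB: Float = -160
        let avgDB = rms > 0 ? max(20 * log10(rms), floorDB) : floorDB
        let peakDB = peak > 0 ? max(20 * log10(peak), floorDB) : floorDB
        return ChannelLevels(averagePowerDB: avgDB, peakPowerDB: peakDB)
    }
}
```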
## Testing

The project includes XCTest-based unit tests in `PresentationRecorderTests/`. The `PowerMeterTests` suite covers key audio processing scenarios:
- Zero-buffer produces valid zero levels
- Float buffer processing yields correct non-zero average and peak values
- `MeterTable` correctly clamps power values at boundaries
- `processSilence()` resets accumulated values
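The float-buffer scenario above might be written along these lines. `PowerMeter`'s interface is not shown in this document, so the `levels`, `average`, and `peak` accessors here are hypothetical.

```swift
import XCTest
import AVFoundation

final class PowerMeterFloatTests: XCTestCase {
    func testFloatBufferYieldsNonZeroLevels() throws {
        let format = try XCTUnwrap(AVAudioFormat(standardFormatWithSampleRate: 44_100,
                                                 channels: 1))
        let buffer = try XCTUnwrap(AVAudioPCMBuffer(pcmFormat: format,
                                                    frameCapacity: 1024))
        buffer.frameLength = 1024

        // Fill the buffer with a full-scale 440 Hz sine wave.
        let samples = try XCTUnwrap(buffer.floatChannelData)[0]
        for i in 0..<1024 {
            samples[i] = sin(2 * .pi * 440 * Float(i) / 44_100)
        }

        let meter = PowerMeter()
        meter.process(buffer: buffer)

        // Hypothetical accessors: a sine wave should yield non-zero levels.
        XCTAssertGreaterThan(meter.levels[0].average, 0)
        XCTAssertGreaterThan(meter.levels[0].peak, 0)
    }
}
```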
## Tech Stack

| Component | Technology |
| --- | --- |
| Platform | macOS (native) |
| Language | Swift |
| UI Framework | SwiftUI |
| Screen Capture | ScreenCaptureKit (SCStream, SCRecordingOutput) |
| Camera | AVFoundation (AVCaptureSession) |
| Audio DSP | Accelerate (vDSP) |
| Persistence | SwiftData |
| Concurrency | Swift async/await, AsyncThrowingStream |
| Testing | XCTest |
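As a sketch of the AVFoundation camera path listed above, a `CameraRecorder`-style session setup could look like the following. The `CameraCapture` type and its error handling are illustrative, not the app's actual implementation.

```swift
import AVFoundation

// Minimal, independent AVCaptureSession pipeline: default camera in,
// movie file out. Only the AVFoundation types are real; the rest is a sketch.
final class CameraCapture {
    let session = AVCaptureSession()
    private let movieOutput = AVCaptureMovieFileOutput()

    func configure() throws {
        session.beginConfiguration()
        defer { session.commitConfiguration() }

        guard let camera = AVCaptureDevice.default(for: .video) else {
            throw NSError(domain: "CameraCapture", code: 1) // no camera available
        }
        let input = try AVCaptureDeviceInput(device: camera)
        if session.canAddInput(input) { session.addInput(input) }
        if session.canAddOutput(movieOutput) { session.addOutput(movieOutput) }
    }

    func start(recordingTo url: URL, delegate: AVCaptureFileOutputRecordingDelegate) {
        session.startRunning()
        movieOutput.startRecording(to: url, recordingDelegate: delegate)
    }
}
```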
## Source Code

The source code is available on the project’s GitHub repository.