◢ SALISH SIGINT
Node 03 · Live Acoustic

That old iPhone is a $0 streaming computer.
Plug a hydrophone into it.

An iPhone 8 sitting in your drawer has a 6-core A11, two ISP-fed mics, hardware H.264 + HEVC encoders, an LTE modem, a CoreML accelerator, and a battery. It is, no exaggeration, a better edge node than half the embedded gear we'd otherwise spec. The job: pair it with a hydrophone, keep it dry, push high-bitrate audio to a public ingest, and let CoreML flag the interesting seconds before they cost bandwidth.

Why iPhone, not Android, not a Pi

Latency-tuned audio

iOS Core Audio + AVAudioEngine clock the input within ~5 ms. Android's MediaRecorder / AAudio path is faster than it used to be, but iOS is still the easier win.

A mature streaming-app ecosystem

Apps like Larix Broadcaster, nanoStream, and Reincubate Camo exist and are still maintained. The same RTSP/SRT/RTMP chain that streamers use ports straight to a hydrophone.

CoreML on-device

A finetuned YAMNet → orca classifier runs at < 5% CPU on an A11. The phone can flag calls in real time and only push HD audio when something interesting fires.

Hardware path

   ┌──────── water ────────┐    ┌──────────────── deck ────────────────┐
   │                       │    │                                       │
   │  Aquarian H2a-XLR     │    │   XLR-F  ─►  Mogan-style XLR-to-      │
   │  (10Hz–100kHz, ±1dB)  │────┼──────►       Lightning interface       │
   │  20 m cable           │    │              (or Apogee MiC Plus       │
   │                       │    │               via Lightning/USB-C)     │
   └───────────────────────┘    │                            │           │
                                │                            ▼           │
                                │                ┌─────────────────────┐ │
                                │                │  iPhone 8 / SE 2    │ │
                                │                │  iOS 16+            │ │
                                │                │  Larix or PWA       │ │
                                │                │  CoreML orca model  │ │
                                │                │  H.264 + AAC LC     │ │
                                │                └──────────┬──────────┘ │
                                │                           │            │
                                │                           ▼ RTSP/SRT   │
                                │                  ┌─────────────────┐   │
                                │                  │  ingest server  │   │
                                │                  │  (MediaMTX)     │   │
                                │                  └────────┬────────┘   │
                                │                           ▼            │
                                │           HLS public + WebSocket meta  │
                                │           → salish-sigint dashboard    │
                                └────────────────────────────────────────┘

Bill of materials

Part · Notes · USD
Old iPhone (7+ / 8 / SE 2) · Anything with iOS 16+ and Lightning works · $0–80 used
Aquarian Audio H2a-XLR · 20 m cable, real flat response, balanced output · $219
Apogee Jam+ or Saramonic SmartRig II · XLR / 1/4" → Lightning preamp, +48 V phantom not required · $60–155
Lightning to USB-C adapter (if newer iPhone) · Apple's "USB Camera Adapter" trick — supports class-compliant USB audio · $39
OWC Travel Power dock or wired 12 V → USB · Pier power if you have it · $25
Pelican 1150 with foam cutout · Phone fits in lid; cable through gland · $60
Cellular SIM (data-only) or dock Wi-Fi · Mint Mobile / US Mobile / Visible · $15/mo
Total (one-time) · assuming free phone, cellular path · ~$385
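As a sanity check on that total, here's the arithmetic (prices from the table; which build you land on depends on whether your phone needs the USB-C adapter):

```python
# Sum the one-time parts from the bill of materials above (USD, low end).
parts = {
    "Aquarian H2a-XLR": 219,
    "XLR-to-Lightning preamp": 60,   # Apogee Jam+ / SmartRig II, low end
    "Powered dock": 25,
    "Pelican 1150": 60,
}
lightning_build = sum(parts.values())   # free Lightning phone, no adapter
usb_c_build = lightning_build + 39      # newer phone, add the adapter
print(lightning_build, usb_c_build)     # 364 403
```

The quoted ~$385 lands between those two builds; the $15/mo SIM is on top.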

Software stack — the easy way

Capture & encode: Larix Broadcaster (free)

Larix has been the de facto iOS RTMP/RTSP/SRT pusher for a decade. Audio-only mode is one toggle. The settings that matter for hydrophone work:

Sample rate        48 kHz (96 kHz if your interface supports it — orca clicks reach 80 kHz, so even 96 kHz sampling only captures their lower half)
Channels           Mono
Codec              AAC-LC at 256 kbps for archive · Opus at 64 kbps for live preview
Connection         SRT to your ingest, with a latency=2000 ms buffer for cellular jitter
Auto-restart       ON — phone will drop and resume gracefully
Background audio   ON via Settings → Background Modes → Audio
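Before settling on those bitrates, it's worth running the cellular data budget. A back-of-envelope sketch (container overhead ignored; plan caps vary):

```python
# Monthly data volume for a continuous audio stream at a given bitrate.
def gb_per_month(kbps: float, days: int = 30) -> float:
    bytes_per_sec = kbps * 1000 / 8
    return bytes_per_sec * 86400 * days / 1e9

archive = gb_per_month(256)   # AAC-LC archive quality
preview = gb_per_month(64)    # Opus live preview
print(round(archive, 1), round(preview, 1))   # 82.9 20.7
```

At 256 kbps around the clock you're near 83 GB/month, which blows past most $15 plans; the Opus preview rate is what keeps the cellular path viable, with full-rate audio reserved for detections.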

Ingest: MediaMTX on a $5 VPS

# mediamtx.yml — ingest SRT, expose HLS + RTSP + WebRTC simultaneously
paths:
  hydrophone-pier70:
    source: publisher
    publishUser: bay-station
    publishPass: ${HYDROPHONE_KEY}

# protocol settings are flat top-level keys in mediamtx.yml
srt: yes
srtAddress: :8890

hls: yes
hlsVariant: lowLatency
hlsSegmentDuration: 1s

webrtc: yes
webrtcICEServers2:
  - url: stun:stun.l.google.com:19302

Spin up on a $5 Hetzner / Vultr box, point Larix at srt://your.host:8890?streamid=publish:hydrophone-pier70:bay-station:$HYDROPHONE_KEY (MediaMTX reads the path and credentials out of the streamid), done. HLS playback URL is https://your.host:8888/hydrophone-pier70/index.m3u8 (port 8888 unless you reverse-proxy it) — paste into the Salish SIGINT dashboard.
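Once the phone is pushing, a few lines of Python confirm the ingest is actually producing segments. A sketch: the parser only reads playlist text, so the sample below is illustrative, and `stream_is_live` points at whatever URL your server exposes.

```python
import urllib.request

def playlist_segments(m3u8_text: str) -> list[str]:
    """Return segment URIs from an HLS media playlist (non-comment lines)."""
    out = []
    for line in m3u8_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            out.append(line)
    return out

def stream_is_live(url: str) -> bool:
    text = urllib.request.urlopen(url, timeout=5).read().decode()
    return len(playlist_segments(text)) > 0

sample = """#EXTM3U
#EXT-X-VERSION:3
#EXT-X-TARGETDURATION:1
#EXTINF:1.000,
seg_0001.mp4
#EXTINF:1.000,
seg_0002.mp4
"""
print(playlist_segments(sample))   # ['seg_0001.mp4', 'seg_0002.mp4']
```

Run it from cron and you have a free uptime monitor for the node.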

Software stack — the no-install way (PWA)

If you don't want to install anything, modern Safari supports getUserMedia({audio: true}) + WebRTC + MediaRecorder. You can run this exact page from the iPhone, hit start, and stream. Tradeoffs: it's less robust to backgrounding than Larix, and Safari will cap the session length unless you add a lock-screen-audio shim.

// Run from iPhone Safari — works as a streaming source w/ no install
async function startHydrophone() {
  const stream = await navigator.mediaDevices.getUserMedia({
    audio: {
      sampleRate: 48000,
      channelCount: 1,
      echoCancellation: false,    // off, off, off
      autoGainControl: false,
      noiseSuppression: false,
    }
  });

  // Push to the media server over WHIP (WebRTC-HTTP ingestion protocol)
  const pc = new RTCPeerConnection();
  stream.getTracks().forEach(t => pc.addTrack(t, stream));
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);

  // MediaMTX answers WHIP at /<path>/whip on its WebRTC port (8889 by default)
  const r = await fetch('https://your.host:8889/hydrophone-pier70/whip', {
    method: 'POST',
    headers: { 'Content-Type': 'application/sdp' },
    body: offer.sdp,
  });
  await pc.setRemoteDescription({ type: 'answer', sdp: await r.text() });
}

The audio settings that matter

Turn off echo cancellation, AGC, and noise suppression — each of those filters is tuned to remove everything that isn't human speech, and on a hydrophone everything that isn't human speech is exactly the signal you want. iOS will quietly enable them by default unless you set the AVAudioSession category to .playAndRecord with mode .measurement.

On-device detection — CoreML

Run a YAMNet backbone (Google's 521-class audio classifier) finetuned on the OrcaSound corpus. Tiny model, trivial to convert to CoreML with coremltools. The phone does VAD-style analysis on a 1-s rolling window and pushes a metadata event on detection. The audio keeps streaming regardless; the metadata just lets the dashboard mark interesting moments.

# Train: PyTorch / TF, export to CoreML
import coremltools as ct
mlmodel = ct.convert(
    yamnet_finetuned,
    inputs=[ct.TensorType(shape=(1, 15600))],   # 0.975 s @ 16 kHz
    classifier_config=ct.ClassifierConfig(["orca-call", "vessel", "other"]),
    compute_units=ct.ComputeUnit.CPU_AND_NE,  # Neural Engine where available (A11 falls back to CPU)
)
mlmodel.save("OrcaDetector.mlpackage")

// Swift — push detection events as RTMP onMetaData or over a sidecar WebSocket
let model = try OrcaDetector(configuration: .init())
let result = try model.prediction(audio: window)
if result.classLabel == "orca-call", result.classProbability["orca-call"]! > 0.7 {
  meta.send([
    "t": Date().timeIntervalSince1970,
    "event": "orca-call",
    "conf": result.classProbability["orca-call"]!,
  ])
}
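The 1-s rolling window above has to reach the model as 15600 samples at 16 kHz, so the 48 kHz capture needs resampling and framing first. A minimal sketch of the bookkeeping, using naive 3:1 decimation (a real pipeline should low-pass filter before dropping samples):

```python
SR_CAPTURE = 48_000   # what the audio interface delivers
SR_MODEL   = 16_000   # what YAMNet expects
WIN        = 15_600   # 0.975 s at 16 kHz, the model's input length
HOP        = 8_000    # 0.5 s hop -> overlapping windows

def to_model_windows(samples: list[float]) -> list[list[float]]:
    # Naive decimation: keep every 3rd sample (anti-alias filter omitted).
    x = samples[:: SR_CAPTURE // SR_MODEL]
    return [x[i : i + WIN] for i in range(0, len(x) - WIN + 1, HOP)]

two_seconds = [0.0] * (2 * SR_CAPTURE)
windows = to_model_windows(two_seconds)
print(len(windows), len(windows[0]))   # 3 15600
```

The overlapping hop means a call straddling a window boundary still lands fully inside at least one window.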

Mounting & the boring stuff that breaks first

Public hydrophone networks worth comparing yourself to

Where it fits in the stack

← Node 01 · Spectrum · SigDigger SDR Console
← Node 02 · Telemetry · LoRa Whale Buoy
→ Node 04 · Fusion · Salish SIGINT Mesh
Salish SIGINT · Node 03 / 04 Field-tested with iPhone 8 + Aquarian H2a-XLR · CC-BY-SA