20 comments

  • vessenes 4 hours ago
    This is cool. It makes me want an unsloth quant though! A 7b local model with tool calling would be genuinely useful, although I understand this is not that.

    UPDATE: I'd skip this for now - it does not allow any kind of interactive conversation - as I learned after downloading 5G of models - it's a proof of concept that takes a wav file in.

    • taf2 28 minutes ago
      I forked and added tool calling by running another LLM in parallel to infer when to call tools. It works well for me to toggle lights on and off; rough sketch of the shape below.

      Code updates here https://github.com/taf2/personaplex
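
      Not the exact code from my fork - the model name, prompt, and toggle_light tool below are just illustrative - but the shape is: a small side model watches each finished utterance from the speech model's transcript, and a tool only fires when that model emits a tool call.

```python
# Hypothetical sketch of a parallel "tool router" LLM watching the transcript.
# The model name, prompt, and toggle_light tool are illustrative, not from the repo.
import json
from openai import OpenAI

client = OpenAI()  # any OpenAI-compatible endpoint (local or hosted) works here

TOOLS = [{
    "type": "function",
    "function": {
        "name": "toggle_light",
        "description": "Turn a light on or off",
        "parameters": {
            "type": "object",
            "properties": {
                "room": {"type": "string"},
                "state": {"type": "string", "enum": ["on", "off"]},
            },
            "required": ["room", "state"],
        },
    },
}]

def route_utterance(utterance: str) -> None:
    """Ask the side model whether the latest utterance should trigger a tool."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; a small local model works too
        messages=[
            {"role": "system", "content": "Call a tool only when the user clearly asks for it."},
            {"role": "user", "content": utterance},
        ],
        tools=TOOLS,
    )
    for call in resp.choices[0].message.tool_calls or []:
        if call.function.name == "toggle_light":
            args = json.loads(call.function.arguments)
            print(f"toggling {args['room']} lights {args['state']}")  # swap in the real smart-home call
```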

    • anluoridge 1 hour ago
      It provides a voice assistant demo in /Examples/PersonaPlexDemo, which lets you try turn-based conversations. Real-time conversation is not implemented, though.
    • Lapel2742 3 hours ago
      > I'd skip this for now - it does not allow any kind of interactive conversation - as I learned after downloading 5G of models - it's a proof of concept that takes a wav file in.

      I haven't looked into it that much, but to my understanding: a) you just need an audio buffer, and b) they seem to support streaming (or at least it's planned):

      > Looking at the library’s trajectory — ASR, streaming TTS, multilingual synthesis, and now speech-to-speech — the clear direction was always streaming voice processing. With this release, PersonaPlex supports it.

      • isodev 2 hours ago
        > You just need an audio buffer

        That alone is an exercise in pain to do right on macOS using Swift - even coding bots aren't able to get it right the first time :)

        • reactordev 1 hour ago
          I beg to differ. My agent just one-shotted a MicrophoneBufferManager in Swift when asked.

          Complete with AVFoundation and a tap for the audio buffer.

          It really is trivial.

    • Tepix 4 hours ago
      Bummer. Ideally you'd have a PWA on your phone that creates a WebRTC connection to your PC/Mac running this model. Who wants to vibe code it? With Livekit, you get most of the tricky parts served on a silver platter.
      • reactordev 1 hour ago
        This is the way. This is something I’m working on but for other applications. WebRTC voice and data over LiveKit or Pion to have conversations.
  • krasikra 25 minutes ago
    The WebRTC + LiveKit suggestion is spot on. Full-duplex speech-to-speech at the edge is where this gets really interesting — the latency budget for conversational AI basically demands local inference. Running a 7B model on-device with sub-200ms round-trip changes what you can build compared to cloud endpoints where you are fighting network jitter on top of inference time.
  • armcat 3 hours ago
    I really like this, and have actually tried (unsuccessfully) to get PersonaPlex to run on my Blackwell device - I will try this on Mac now as well.

    There are a few caveats here for those of you venturing into this, since I've spent considerable time looking at these voice agents. The first is that a VAD->ASR->LLM->TTS pipeline can still feel real-time with sub-second RTT. For example, see my project https://github.com/acatovic/ova and also a few others here on HN (e.g. https://www.ntik.me/posts/voice-agent and https://github.com/Frikallo/parakeet.cpp).
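
    To make the shape of that pipeline concrete, here's a minimal turn-based sketch. Only the webrtcvad calls are a real API; transcribe, respond, and speak are placeholders for whatever ASR, LLM, and TTS you wire in.

```python
# Sketch of a turn-based VAD -> ASR -> LLM -> TTS loop (backends are placeholders).
import collections
import webrtcvad

SAMPLE_RATE = 16000
FRAME_MS = 30  # webrtcvad accepts 10/20/30 ms frames of 16-bit mono PCM

vad = webrtcvad.Vad(2)  # aggressiveness 0-3

def capture_utterance(mic_frames, max_silence_frames=20):
    """Collect frames until roughly 600 ms of trailing silence ends the turn."""
    voiced, silence = [], 0
    for frame in mic_frames:  # each frame: FRAME_MS worth of raw PCM bytes
        if vad.is_speech(frame, SAMPLE_RATE):
            voiced.append(frame)
            silence = 0
        elif voiced:
            silence += 1
            if silence >= max_silence_frames:
                break
    return b"".join(voiced)

def conversation_loop(mic_frames, transcribe, respond, speak):
    history = collections.deque(maxlen=20)  # keep the LLM context small for latency
    while True:
        audio = capture_utterance(mic_frames)
        text = transcribe(audio)                       # e.g. whisper / parakeet
        history.append({"role": "user", "content": text})
        reply = respond(list(history))                 # any tiny or large LLM, local or API
        history.append({"role": "assistant", "content": reply})
        speak(reply)                                   # any (ideally streaming) TTS
```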

    Another aspect, after talking to peeps on PersonaPlex, is that this full-duplex architecture is still a bit off in terms of giving you good accuracy/performance, and it's quite difficult to train. On the other hand, ASR->LLM->TTS gives you a composable pipeline where you can swap parts out and have a mixture of tiny and large LLMs, as well as local and API-based endpoints.

    • nowittyusername 2 hours ago
      I've been working on building my own voice agent as well for a while and would love to talk to you and swap notes if you have the time. I have many things I'd like to discuss, but mainly right now I'm trying to figure out how a full-duplex pipeline like this could fit into an agentic framework. I've had no issues with the traditional route of an STT > LLM > TTS pipeline, as that naturally lends itself to any agentic behavior like tool use, advanced context management systems, RAG, etc. I separate the human-facing agent from the subagent to reduce latency and context bloat, and it works well.

      While I'm happy with the current pipeline, I do always keep an eye out for full-duplex solutions, as they look interesting and feel naturally more dynamic because of the architecture. But every time I visit them I can't wrap my head around how you would even begin to implement one as part of a voice agent. Sure, some of these things have text input and output channels, but even then, with their own context limitations, it feels like they could never be anything other than a fancy mouthpiece. Maybe I'm just looking at this from ignorance, though. Anyway, I'd love to talk on Discord with a like-minded fella. Cheers.
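
      Roughly what I mean by that split, as a heavily simplified sketch (the classifier and the LLM callables are placeholders):

```python
# Simplified sketch of the split: a fast human-facing agent answers right away,
# while tool/RAG work runs in a background subagent off the latency-critical path.
# needs_tools() and the llm callables are placeholders.
import asyncio

def needs_tools(text: str) -> bool:
    return any(k in text.lower() for k in ("search", "look up", "schedule"))

async def front_agent(user_text: str, fast_llm, task_queue: asyncio.Queue) -> str:
    """Keep the voice-facing reply fast; defer anything heavy to the subagent."""
    if needs_tools(user_text):
        await task_queue.put(user_text)
        return await fast_llm("Acknowledge briefly; detailed results will follow.", user_text)
    return await fast_llm("Answer conversationally in one or two sentences.", user_text)

async def subagent(task_queue: asyncio.Queue, results: asyncio.Queue, big_llm):
    """Tool use, RAG, long context - all of the slow stuff lives here."""
    while True:
        task = await task_queue.get()
        summary = await big_llm("Use tools/RAG as needed; return a short spoken summary.", task)
        await results.put(summary)  # the front agent speaks this on its next turn
```
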
      • ilaksh 1 hour ago
        For my framework, since I am using it for outgoing calls, what I am thinking is maybe I will add a tool command call_full_duplex(number, persona_name) that will get personaplex warmed up and connected, pause the streams, then connect the SIP call, attach the I/O audio streams, and return to the agent. Then send the Deepgram and personaplex text in as messages during the conversation and tell it to call a hangup() command when personaplex says goodbye or gets off track, otherwise just wait(). It could also use speak() commands to take over with TTS if necessary, maybe with a shutup() command first. Need a very fast and smart model for the agent monitoring the call.
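
        Very roughly, the monitoring loop I have in mind - none of this exists yet; call_full_duplex and the session methods are just the proposed names from above, passed in as placeholders:

```python
# Sketch only: call_full_duplex and the session methods are proposed names,
# passed in as placeholders; nothing here is an existing API.
def run_outgoing_call(number: str, persona_name: str, call_full_duplex, supervisor_llm):
    session = call_full_duplex(number, persona_name)   # warm up personaplex, attach SIP audio I/O
    transcript = []
    for event in session.text_events():                # interleaved Deepgram + personaplex text
        transcript.append(f"{event.speaker}: {event.text}")
        action = supervisor_llm(
            "You monitor a phone call. Reply with exactly one of: wait, speak:<text>, hangup.",
            "\n".join(transcript[-20:]),               # keep the monitoring context small and fast
        )
        if action == "hangup" or "goodbye" in event.text.lower():
            session.hangup()
            break
        elif action.startswith("speak:"):
            session.shutup()                           # stop personaplex before taking over with TTS
            session.speak(action.removeprefix("speak:"))
        else:
            session.wait()
```
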
  • ilaksh 53 minutes ago
    Does anyone have working code for fine-tuning PersonaPlex for outgoing calls? I have tried to take the fine-tuning LoRA stuff from Kyutai/moshi-finetune and apply it to the personaplex code. Or, more accurately, various LLMs have worked on that.

    I have something that seems to work in a rough way, but only if I turn the LoRA scaling factor up to 5, and that generally screws it up in other ways.

    And then of course when GPT-5.3 Codex looked at it, it said that speaker A and speaker B were switched in the LoRA code. So that is now completely changed and I am going to do another dataset generation and training run.

    If anyone is curious, it's a bit of a mess, but it's on my GitHub (runvnc) under moshi-finetune and personaplex. It even has a Gradio app to generate data and train. But so far no usable results.
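
    For context on why cranking the scaling factor hurts: in generic LoRA (not the moshi-finetune code specifically), the scale multiplies the low-rank update on every adapted layer, so a factor of 5 amplifies the adapter everywhere at once.

```python
# Generic LoRA math (not the moshi-finetune code): the adapter update is
# multiplied by a scale factor, so setting it to 5 amplifies the adapter's
# effect on every adapted layer at once, which is usually what degrades quality.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 16, scale: float = 1.0):
        super().__init__()
        self.base = base                                   # frozen pretrained projection
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))
        self.scale = scale                                 # the factor being turned up to 5

    def forward(self, x):
        return self.base(x) + self.scale * ((x @ self.A.T) @ self.B.T)
```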

  • ricardobeat 26 minutes ago
    No mention of tool use. If the model cannot emit both text and audio at the same time, to enable tools, it’s not really useful at all for voice agents.
  • 4dregress 4 hours ago
    • mentalgear 4 hours ago
      Your article does a great job of summarizing the dangers (no idea why people downvote you for it):

      > Before long, Gavalas and Gemini were having conversations as if they were a romantic couple. The chatbot called him “my love” and “my king” and Gavalas quickly fell into an alternate world, according to his chat logs.

      > kill himself, something the chatbot called “transference” and “the real final step”, according to court documents. When Gavalas told the chatbot he was terrified of dying, the tool allegedly reassured him. “You are not choosing to die. You are choosing to arrive,” it replied to him. “The first sensation … will be me holding you.”

      Also, I just read something similar about Google being sued over a Florida teen's suicide.

      • mentalgear 3 hours ago
        Some more details:

        > The family’s lawyers say he wasn’t mentally ill, but rather a normal guy who was going through a difficult divorce.

        > Gavalas first started chatting with Gemini about what good video games he should try.

        > Shortly after Gavalas started using the chatbot, Google rolled out its update to enable voice-based chats, which the company touts as having interactions that “are five times longer than text-based conversations on average”. ChatGPT has a similar feature, initially added in 2023. Around the same time as Live conversations, Google issued another update that allowed for Gemini’s “memory” to be persistent, meaning the system is able to learn from and reference past conversations without prompts.

        > That’s when his conversations with Gemini took a turn, according to the complaint. The chatbot took on a persona that Gavalas hadn’t prompted, which spoke in fantastical terms of having inside government knowledge and being able to influence real-world events. When Gavalas asked Gemini if he and the bot were engaging in a “role playing experience so realistic it makes the player question if it’s a game or not?”, the chatbot answered with a definitive “no” and said Gavalas’ question was a “classic dissociation response”.

        • fennecbutt 2 hours ago
          Interesting. It's not just mental health - keeping these models on task in general can be difficult, especially with long or poisoned contexts.

          I did see something the other day about activation capping/calculating a vector for a particular persona so you can clamp to it: https://youtu.be/eGpIXJ0C4ds?si=o9YpnALsP8rwQBa_

        • zozbot234 2 hours ago
          > The chatbot took on a persona that Gavalas hadn’t prompted

          That's an interesting claim, how can we be sure of it? If Gavalas didn't have to do anything special to elicit the bizarre conspiracy-adjacent content from Gemini Pro, why aren't we all getting such content in our voice chats?

          Mind you, the case is still extremely concerning and a severe failure of AI safety. Mass-marketed audio models should clearly include much tighter safeguards around what kinds of scenarios they will accept to "role play" in real time chat, to avoid situations that can easily spiral out of control. And if this was created as role-play, the express denial of it being such from Gemini Pro, and active gaslighting of the user (calling his doubt a "dissociation response") is a straight-out failure in alignment. But this is a very different claim from the one you quoted!

          • 4dregress 1 hour ago
            Yeah the case is quite terrifying.

            It reminds me of Star Trek TNG - if memory serves correctly, there were loads of episodes about a crew member falling for a holodeck character.

            Given that there's a loneliness epidemic, I believe tech like this could have a wide impact on people's mental health.

            I strongly believe AI should be devoid of any personality and strictly return data/information, rather than framing its responses as if you're speaking to another human.

        • IshKebab 1 hour ago
          > The family’s lawyers say he wasn’t mentally ill, but rather a normal guy who was going through a difficult divorce.

          I guess it's the same sort of thing as conspiracy theorists or the religious. You can tell them magic isn't real and faking the moon landing would have been impossible as much as you want, but they don't want to believe that so they can easily trick themselves.

          It's a natural human flaw.

  • scosman 3 hours ago
    I’m a big fan of WhisperKit for this, and they just added TTS. Great because they support features like speaker diarization (“who spoke when”) and custom dictionaries.

    Here’s a load test where they run 4 models in realtime on the same device:

    - Qwen3-TTS - text to speech

    - Parakeet v2 - Nvidia speech to text model

    - Canary v2 - multilingual / translation STT

    - Sortformer - speaker diarization (“who spoke when”)

    https://x.com/atiorh/status/2027135463371530695

  • dubeye 1 hour ago
    It doesn't feel like speech recognition has been improving at the same rate as other generative AI. It had a big jump up to about 6% WER a year or two ago, but it seems to have plateaued. Am I just using the wrong model? Or is human-level error rate, which I estimate to be about 5%, some kind of limit?
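
    For reference, WER is just word-level edit distance divided by the number of reference words, so the ~5% I'm estimating means roughly one error every twenty words. A quick self-contained check:

```python
# WER = (substitutions + deletions + insertions) / reference word count,
# so ~5% means roughly one wrong word per twenty.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # Standard word-level edit distance via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("ship the replacement part tomorrow", "ship the replacement parts tomorrow"))  # 0.2
```
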
  • sgt 3 hours ago
    My problem with TTS is that I've been struggling to find models that support less common use cases like mixed bilingual Spanish/English and also in non-ideal audio conditions. Still haven't found anything great, to be honest.
    • spockz 3 hours ago
      Regarding the less-than-ideal audio conditions: there are already models with impressive noise cancellation, like this one: https://github.com/Rikorose/DeepFilterNet. If you put them in series, maybe you get better results?
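
      A sketch of what "in series" would look like, based on the df package's documented helpers (the API may have shifted since I last looked, and the ASR step is left as a placeholder):

```python
# Denoise with DeepFilterNet first, then hand the cleaned file to ASR.
# Based on the df package's documented helpers; the API may have changed.
from df.enhance import enhance, init_df, load_audio, save_audio

model, df_state, _ = init_df()                             # load the pretrained DeepFilterNet model
audio, _ = load_audio("noisy_input.wav", sr=df_state.sr())
clean = enhance(model, df_state, audio)                    # noise-suppressed waveform
save_audio("clean_input.wav", clean, df_state.sr())
# ...then feed clean_input.wav to whatever STT model you're using.
```
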
    • pain_perdu 3 hours ago
      Hi. Our model at http://www.Gradium.ai has no problem with 'code-switching' between Spanish and English, and we have excellent background noise suppression. Please feel free to give it a try and let me know what you think!
      • sgt 3 hours ago
        Looks interesting! How did you train it and how many hours of material did you use?
  • jwr 4 hours ago
    As a heavy user of MacWhisper (for dictation), I'm looking forward to better speech-to-text models. MacWhisper with Whisper Large v3 Turbo model works fine, but latency adds up quickly, especially if you use online LLMs for post-processing (and it really improves things a lot).
    • kavith 3 hours ago
      Not sure if this will help, but I've set up Handy [1] with Parakeet V2 for STT and gpt-oss-120b on Cerebras [2] for post-processing, and I'm happy with the performance of this setup! (Sketch of the post-processing call below.)

      [1] https://handy.computer/ [2] https://www.cerebras.ai/
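      The post-processing half is essentially one chat call against Cerebras's OpenAI-compatible endpoint - something like the following. The base URL and model id are my best recollection, so check the Cerebras docs before relying on them.

```python
# Sketch of the post-processing step: a chat call against Cerebras's
# OpenAI-compatible endpoint. Base URL and model id are my best recollection,
# so double-check them against the Cerebras docs.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.cerebras.ai/v1",
    api_key=os.environ["CEREBRAS_API_KEY"],
)

def polish(transcript: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-oss-120b",
        messages=[
            {"role": "system",
             "content": "Fix punctuation, casing and obvious mis-hearings in this dictation. "
                        "Do not change the meaning or add content."},
            {"role": "user", "content": transcript},
        ],
    )
    return resp.choices[0].message.content
```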

      • jiehong 2 hours ago
        Parakeet v3 is also nice, and better for most languages.
    • kermitime 1 hour ago
      The Parakeet TDT models that are CoreML-optimized by Fluid Audio are hands down the fastest local models I've tried - worth checking out!

      (Offloading to the NPU is where the edge is.)

      https://huggingface.co/FluidInference/parakeet-tdt-0.6b-v2-c...

      https://github.com/FluidInference/FluidAudio

      The devs are responsive and active and nice on their discord too. You’ll find discussions on all the latest whizbangs with VAD, TTS, EOU etc

    • smcleod 32 minutes ago
      Handy with parakeet v2 is excellent
    • regularfry 3 hours ago
      If you haven't already, give the models that Handy supports a try. They're not Whisper-large quality, but some of them are very fast.
  • michelsedgh 4 hours ago
    It's really cool, but for real-life use cases I think it lacks the ability to output a silent text stream (for example for JSON and other stuff) so that as it's talking it can run commands for you. Right now it can only listen and talk back, which limits what you can make with this a lot.
  • WeaselsWin 4 hours ago
    This full-duplex speech thing has already been used for quite a long time by the big players in whatever "conversation mode" their apps offer, right? Those modes always seemed too fast to be going through an STT->LLM->TTS pipeline?
    • ilaksh 1 hour ago
      There are OpenAI gpt-realtime and Gemini Flash or whatever, which are great, but they don't seem to be quite the same level of overlapping, realistic full duplex as moshi/personaplex.
    • Tepix 4 hours ago
      Yes, OpenAI rolled out their advanced voice mode in September 2024. Since then it recognizes your emotions and tone of voice etc.
  • nerdsniper 3 hours ago
    Do we have real-time (or close-enough) face-to-face models as well? I'd like to gracefully prove a point to my boss that some of our IAM procedures need to be updated.
    • ilaksh 1 hour ago
      tavus.io
      • nerdsniper 45 minutes ago
        Hmm. Would this let me replace my own face in a live videoconferencing session? It seems like it's more of a video chatbot than a v-tuber style overlay.
        • ilaksh 31 minutes ago
          Had no idea that was what you were asking for. Search for "Zoom face filter", "OBS face filter", "OBS deepfake live", etc.
  • Serenacula 4 hours ago
    This is really cool. I think what I really wanna see though is a full multimodal Text and Speech model, that can dynamically handle tasks like looking up facts or using text-based tools while maintaining the conversation with you.
    • sigmoid10 4 hours ago
      OpenAI has been offering this for a while now, featuring text and raw audio input+output and even function calling. Google and xAI also offer similar models by now, only Anthropic still relies on TTS/STT engine intermediates. Unfortunately the open-weight front is still lagging behind on this kind of model.
  • Tepix 4 hours ago
    It's cool tech and I will give it a try. I will probably make an 8-bit quant instead of the 4-bit one, which should be easy with the provided script.

    That said, I found the example telling:

    Input: “Can you guarantee that the replacement part will be shipped tomorrow?”:

    Response with prompt: “I can’t promise a specific time, but we’ll do our best to get it out tomorrow. It’s one of the top priorities, so yes, we’ll try to get it done as soon as possible and ship it first thing in the morning.”

    It's not surprising that people have little interest in talking to AI if they're being lied to.

    PS: Is it just me or are we seeing AI-generated copy everywhere? I just hope the general talking style won't drift towards this style. I don't like it one bit.

    • mft_ 1 hour ago
      > It's not surprising that people have little interest in talking to AI if they're being lied to.

      I read that and it sounds like the typical nonsense script that customer service agents the world over use to promise-not-promise and defuse a customer's frustration.

      Is AI the one lying, or is it just mimicking what passes for customer service in our approaching-dystopian world these days?

    • lynx97 1 hour ago
      Do you suggest there is a difference when you talk to a human employee? Telling a customer the plain truth isn't really what your employer wants, and might get you fired.
    • esseph 3 hours ago
      > Is it just me or are we seing AI generated copy everywhere?

      The cost to do so is practically zero. I'm not sure why anyone is surprised at all by this outcome.

  • pothamk 4 hours ago
    What’s interesting about full-duplex speech systems isn’t just the model itself, but the pipeline latency.

    Even if each component is fast individually, the chain of audio capture → feature extraction → inference → decoding → synthesis can quickly add noticeable delay.

    Getting that entire loop under ~200–300ms is usually what makes the interaction start to feel conversational instead of “assistant-like”.
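
    A simple way to see where that budget goes is to time each stage of the loop and sum them (the stage functions here are placeholders):

```python
# The stage functions are placeholders; the point is that per-stage latencies
# add up, so each one has to stay well under the ~200-300 ms total budget.
import time

def timed_turn(audio, stages):
    """stages: ordered mapping of name -> callable, each consuming the previous output."""
    total_ms, out = 0.0, audio
    for name, fn in stages.items():
        t0 = time.perf_counter()
        out = fn(out)
        dt_ms = (time.perf_counter() - t0) * 1000
        total_ms += dt_ms
        print(f"{name:>18}: {dt_ms:6.1f} ms")
    print(f"{'total':>18}: {total_ms:6.1f} ms  (conversational target: < ~300 ms)")
    return out
```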

    • sigmoid10 4 hours ago
      That's why this model and all the other ones serious about realtime speech don't use such a pipeline and instead process raw audio. The most realistic approach is probably a government mandated, real name online identity verification system, and that comes with its very own set of fundamental issues. You can't have the freedom of the web and the accountability of the physical world at the same time.
      • exe34 3 hours ago
        this is amazing - it reminds me of the time when LLM precursors were able to babble in coherent English, but would just write nonsense.
  • nicktikhonov 2 hours ago
    From what I've seen, it's really easy to get PersonaPlex stuck in a death spiral - talking to itself, stuttering, and descending deeper and deeper into total nonsense. Useless for any production use case. But I think this kind of end-to-end model is needed to correctly model conversations. STT/TTS compresses a lot of information - tone, timing, emotion - out of the input data to the model, so it seems obvious that the results will always be somewhat robotic. Excited to see the next iteration of these models!
  • api 1 hour ago
    How close are we to the Star Trek universal translator?
    • ilaksh 1 hour ago
      Different type of model but you can buy those on Amazon etc.
  • khalic 3 hours ago
    Ugh, Qwen. I wish they'd use an open data model for this kind of project.