We have a nice blog post on this where we benchmarked our stack against both of them (https://mycroft.ai/blog/the-mycroft-benchmark/). The answer was: we were tested and found wanting. This was great! It was our first benchmark and gives us a baseline to compare against. The team is now working to improve our performance and bring it up to par.
Our first production version will be released in February 2019. At that time I expect you'll see similar performance across the top 10 skills. We're certainly working hard to make that happen.
AFAIK, they have a checkbox on your settings page where you can opt in to donate your voice. So every time you say "Hey Mycroft," it records and sends the next few seconds. If you didn't opt in, the audio is discarded right after it's converted to text.
They also collect your IP address so the information can be sent back to your Mycroft; I think they keep no logs of that either. There is, however, some information they retain, like your city, so the weather skill can use your default location. I'm pretty sure you can leave all this personal information blank and configure it in your local JSON file instead, but that's not so user friendly. There are many skills that need some personal information from you (Spotify account, Google account, etc.) to interact with those services. Many of them can be configured through the JSON file as well, because Mycroft community members are pretty concerned about putting any data on the mycroft.ai site at all.
Mycroft is designed to be modular in that respect. By default it uses Google, but there are other third-party systems you can plug in. It's a necessity at the moment. The reason, as LoonyGnoll mentioned, is that it requires a massive sample size to train, and Google has been at this for years using Google Voice's voicemail transcribing. That's why Google is so good at voice transcription.
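For what it's worth, the swap happens in Mycroft's JSON config file (mycroft.conf). Here's a hedged sketch of what pointing STT at a self-hosted DeepSpeech server might look like - the module name and keys are from my memory of the docs and may differ between versions:

```json
{
  "stt": {
    "module": "deepspeech_server",
    "deepspeech_server": {
      "uri": "http://localhost:8080/stt"
    }
  }
}
```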
Part of the Mycroft project is OpenSTT, but because of how long it could take to get a reliable STT model, they're launching without it. IIRC, when you use Mycroft it currently gets the STT results from the third-party service, and the data is also sent to the OpenSTT project to help train their voice model. At least, that was the proposed plan years ago when I started following Mycroft, back when it was just a concept. It might have changed.
Anyway, setting up and playing with Mycroft is pretty simple. I have a Mycroft VM going and it works quite well, but last time I fired it up there was a lack of useful "skills" (what Mycroft calls the things it can recognize and respond to). Someday I plan to start writing my own.
More reading:
https://mycroft.ai/documentation/mycroft-software-hardware/#speech-to-text-stt
https://openstt.org/
I am waiting for the release of the Mycroft assistant. I was hoping for the December release, but they are still working out some issues. Mycroft was designed from the ground up to be private.
Sure, but you'll need to hack around with the message bus to make it work. Docs are here: https://mycroft.ai/documentation/
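For the curious, here's a minimal sketch of poking the bus from Python. It assumes the mycroft-messagebus-client package and the default local bus address; treat the docs above as authoritative:

```python
# pip install mycroft-messagebus-client
from mycroft_bus_client import MessageBusClient, Message

# Connects to the default bus at ws://localhost:8181/core
client = MessageBusClient()
client.run_in_thread()

# Ask Mycroft to speak, the same way a skill would
client.emit(Message("speak", {"utterance": "Hello from the message bus"}))
```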
Sounds like a good weekend project.
Mycroft has always been one of my favorite open source projects!
I see so much potential in voice user interfaces and AI chat that it's exciting to have one to actually tinker with!
Main website · GitHub · r/mycroftai
Mark 1 is $179.99; Mark 2 is $189.00, available for preorder and should be out next month (December 2018); challenge coins are $55; and you can also set up a monthly donation if you so choose.
This is the first time I've heard of either of these devices, but I'd advise you to look at Mycroft.
I think Mycroft is the closest to a fully featured voice assistant that is open source and privacy respecting. They have plans to let you run your own backend locally with no internet access at all - or maybe that's already possible. I also believe they are using Mozilla's Common Voice project for training, as well as community opt-in samples.
I don't know how privacy oriented you are or how much you want to work with self built things.
But if you want, you can install Mycroft on a Raspberry Pi or similar instead of buying an Alexa. Simply connect it to some speakers and a mic and you should be set. It integrates nicely with everything as well! :)
Personal assistants are the future of computing. If we don't put a great effort behind them now, we will definitively lose the battle for control over the devices controlling people's lives to the big corporations. This is terrifying (ref. Homo Deus by Harari).
There is one ongoing effort: Mycroft. I think anyone who has the money or resources to support this effort should do it!
We built a tool called Persona to handle this. Missed intents and conversational gambits are fed into it and our community will soon be allowed to resolve these queries. The resulting text is fed into a Machine Learning algorithm that is then responsible for responding to missed intents and engaging in conversation. I wrote a pretty extensive blog post on how this is intended to work: https://mycroft.ai/blog/building-strong-ai-strategy/
That blog post is basically a step-by-step procedure for building an AI that can pass the Turing test.
Mycroft is in development; they met their Kickstarter goals and have started to fulfill some of their rewards. They sent me an SD card to put into a Raspberry Pi, I just haven't had time to sit down and play with it.
Actually, Mycroft appears to have found a reasonable workaround for this that you could probably integrate into Stephanie.
Basically, it looks like they have a two-tiered listening system: one that just waits for the wake word and relies on a local recogniser, and another that processes arbitrary audio in the cloud.
This way you end up with only intended commands going to the cloud, which is far less invasive, especially when the actions of those commands tend to result in API calls anyway.
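If anyone wants to play with the idea, here's a toy sketch of the two-tier split using the SpeechRecognition library - PocketSphinx stays local for the wake word, and only the follow-up clip goes to a cloud recognizer. The wake phrase and structure are just illustrative, not how Mycroft actually implements it:

```python
# pip install SpeechRecognition pocketsphinx pyaudio
import speech_recognition as sr

r = sr.Recognizer()
with sr.Microphone() as source:
    r.adjust_for_ambient_noise(source)
    while True:
        audio = r.listen(source)
        try:
            heard = r.recognize_sphinx(audio)  # offline; nothing leaves the machine
        except sr.UnknownValueError:
            continue
        if "hey mycroft" in heard.lower():
            command = r.listen(source)  # only this clip is sent to the cloud
            try:
                print(r.recognize_google(command))
            except sr.UnknownValueError:
                pass
```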
To be clear I'm not ragging on your project as it's excellent work. I just think we can do better than giving everything to Google when we don't need to.
Common Voice is just building a data set that anyone can use; it's not an app or anything. The problem with local voice recognition is that it's incredibly computationally expensive. Even given how fast computers have gotten, it's just not possible to do voice recognition at the accuracy of a cloud-based service on a regular computer. Not that people aren't trying, though.
If you're really interested, I'd recommend checking out Mycroft.ai and OpenSTT. They're not building local solutions, but since they're open source projects, that's where you're most likely to find people who 1. are also interested in/working on local solutions, 2. know what they're talking about, and 3. can/will talk about it publicly.
Supported languages: We only officially support English at the moment, because language support needs to be implemented in every layer of the stack to be effective. We have some docs available that show how to change the Wake Word and some of the parts of the stack into other languages. https://mycroft.ai/documentation/language-support/portuguese/ (shoutout to @JarbasAI who wrote a lot of that).
Google is currently the default STT, but this can be swapped out. I don't know how Google works internally (cough something something proprietary something), but I presume they may be recording some things. We use PocketSphinx and Precise as our Wake Word software, so it's only the audio after the Wake Word that goes to Google for STT, if that helps.
What happens when teh intarnetz are down? I live in Australia and we use this incredibly advanced technology called copper so I wouldn't know about the internet being down </snark> :P. In all seriousness you currently need an active internet connection for two things:
Google STT - we want to swap this for DeepSpeech/Openvoice, then decouple it from needing the internet. This is a few months away.
home.mycroft.ai - this is where we abstract away API calls to other services, and why you need to register for an account. We "umbrella" a number of individual API calls to services like Wikipedia and Wolfram this way. It's possible to decouple from this, but it means we can't then abstract away third party services - you will need your own API keys.
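To give a flavour of the decoupled route: with your own Wolfram|Alpha AppID, you can hit their Short Answers endpoint directly instead of going through home.mycroft.ai. A rough sketch (the AppID is a placeholder, and this is just one way to call that public API):

```python
# pip install requests
import requests

WOLFRAM_APPID = "YOUR-APPID"  # placeholder; get one from developer.wolframalpha.com

def ask_wolfram(question: str) -> str:
    # Wolfram|Alpha "Short Answers" API returns a plain-text answer
    resp = requests.get(
        "https://api.wolframalpha.com/v1/result",
        params={"appid": WOLFRAM_APPID, "i": question},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.text

print(ask_wolfram("What is the capital of Kansas?"))
```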
Great questions, keep 'em coming!
Are you looking for a sort of AI to communicate with?
The voice-based stuff is just speech-to-text (STT). Behind the front ends, they are all still text based.
If something has open APIs where you can send text instead of using an STT front end, then you should be able to get what you want.
I haven't tried them personally, but two open source projects I know of are Mycroft and Jarvis.
Not really. There are a lot of different things that go into this, and what you do depends on how much you're willing to sacrifice. Increasing privacy and security almost always comes with decreasing convenience. For example, I do not use SMS. If somebody I know wants to "text" me, they can use Signal or Threema. I also won't converse with anybody over a cellular connection unless it's an emergency. This is extremely inconvenient both for me and for some people who want to keep in touch with me, but I consider it worthwhile.
There's a lot of stuff you can do to enhance your privacy. The easiest thing I'd recommend to everybody is to replace your Alexa/Google home with Mycroft. I don't go nearly as far as some people though. For example, Snowden says don't use wifi or 4G. He plugs an ethernet cable into an adapter for his phone.
I love that we now have multiple efforts towards these kinds of technologies: there's this one, Mycroft (which also does hardware), and Jasper.
Soon enough my finances will recover again enough to be able to build one of these things and actually start working towards having my own JARVIS without having to settle for a 1984 telescreen.
That's just automating a few functions, which you can do yourself with an Arduino board or similar if you've got the technical ability.
If you don't, then for dedicated, easy-to-use smart home automation I've only heard of one open hardware solution - Mycroft, which is supposedly privacy respecting.
It's open source but I haven't really looked over it in depth.
TL;DR No.
The Mycroft software runs on a bunch of platforms - RPi, Linux, etc. We need some sort of common glue for the Skills, otherwise we would have to support other stacks on top of these platforms (i.e. C++, PHP, Ruby, Go, Node.js, whatever). We can't stretch that far at the moment; we're a small startup.
No ETA in the last update: https://mycroft.ai/blog/mark-ii-manufacturing-and-product-update-may-2021/
If you've been following for a while, they had a big falling out with their original manufacturer and had to go back to the drawing board. A complete redesign is underway. Dev kits have shipped, but no production details yet. (unless someone from Mycroft can correct me)
There have been some initial efforts by the community to build Mycroft for Android; these efforts are in their infancy but are slowly progressing. You can learn more here; https://mycroft.ai/documentation/android/
Off-the-shelf voice assistants like Alexa and Google Home have a dedicated circuit that just listens for the keyword, in order to conserve power and to not have to upload a 24/7 recording of your living room for central processing. So they can only use a few predefined keywords.
I'm sure some of the DIY voice assistants can support other keywords. There's one called Mycroft. I haven't tried it though.
https://mycroft.ai/get-started/
https://mycroft-ai.gitbook.io/docs/using-mycroft-ai/customizations/wake-word
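From skimming those docs, the change lives in mycroft.conf. A hedged sketch of switching the wake word to "jarvis" with the PocketSphinx listener - the key names, phoneme string, and threshold value here are illustrative and may not match your version:

```json
{
  "listener": {
    "wake_word": "jarvis"
  },
  "hotwords": {
    "jarvis": {
      "module": "pocketsphinx",
      "phonemes": "JH AA R V AH S",
      "threshold": 1e-25
    }
  }
}
```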
Short-term solution: Change your settings back to GPM until they roll out the ability to play your uploaded library (not holding my breath).
Long-term solution: Set up Plex with Phlex or Home Assistant. Also buy a Mycroft, since Google is now evil.
https://github.com/maykar/plex_assistant
Easiest solution: Buy an Echo.
Interesting, I'll check out your system. Incidentally, there's a 404 link to the List of Community-Contributed Skills from https://mycroft.ai/get-started/
I guess I didn't phrase my question well. Sorry. Let me restate: if your entire company were purchased by IBM or Facebook or Binladin Group or ... whatever, do you have consumer privacy protection built into your back-end system?
Looks good - can you provide a link to the Open Source aspect of the project please?
Being able to run something like mycroft.ai on a small device like this would be a really cool project for my kids...
Yes... originally. However, you can plug it into Mozilla's dataset, or into Mycroft's own dataset (which is anonymized), or you can download their dataset, plug it into your own server, and not go out to a third party at all.
But, hey, allow me to insist: go to the AMA and ask there. They have more info than I do.
I know that there is a project called Mycroft that is working on an open source alternative. They have the open source code so that you can build your own, and they also have a ready-made device that you can buy.
It had to get redesigned but is definitely coming - we've got the Dev Kits out in the wild and setting up mass production - monthly updates available on our blog if you're interested:
https://mycroft.ai/blog/
This project helps privacy-focused projects like mycroft.ai keep competing. You can't rule out nefarious uses for any technology, but companies like Amazon and Google can just buy speech data, while small open source projects usually can't.
Hi there! Kathy from Mycroft here, I'm leading a lot of our Languages efforts. We initially covered some of our thoughts on why language support is so hard, and since launching Mycroft Translate we've learnt some more nuances as well;
formality vs informality - different languages use different phrasing depending on whether formal or informal language is being used. We're not sure how to handle this with Mycroft yet, but it is likely we will approach it programmatically so it can tie in with Persona - for example, the user should be able to configure whether the assistant personality is formal or informal.
gender - Our CTO and all-round Top Bloke Steve Penrod has covered this in an earlier thread, but different languages handle gender and phrasing for gender differently. Again, this is something that we need to handle programmatically. In some languages like Russian and Welsh, whole verb forms change depending on gender.
numbers and counting - some languages handle plurals in really interesting ways - such as "none, some, many" rather than specific numbers. So we need to figure out how we tackle this.
Languages are fascinating because they're such a reflection of culture and cultural norms as well.
Bias is always a concern in machine learning systems. One key is to create a channel that is capable of bringing in more diverse data. That is what we and Mozilla are both working on with our efforts:
https://mycroft.ai/voice-mycroft-ai/
"From scratch" is a big project. What I'd recommend you consider instead is to make a contribution to an existing TTS project. Mycroft's Mimic leaps to mind: Mycroft is an audacious project to make an open source virtual assistant platform and is (mostly) written in Python. https://mycroft.ai/documentation/mimic/ -- scoping your project to contribute something small but meaningful to an existing project can be extremely rewarding and measurable.
You might be interested in looking at Mycroft, a fully open source voice assistant that can run on the Raspberry Pi. They developed/use Mimic as their text-to-speech engine, which seems to be based on festival-lite.
We also have a newsletter, but it covers more things like new Skills development, compliments and complaints etc. It's more "marketing-ish" but might be of interest. Subscribe link is in the footer of https://mycroft.ai. We also have a really active Chat community, there's usually a Mycroft-er around because we're spread around the world in different timezones. https://chat.mycroft.ai
Best, Kathy
Hey there! I'm Kathy from Mycroft.AI. Feel free to join us on r/MycroftAI, or head over to https://mycroft.ai to learn more. We're also running a Mattermost channel at https://chat.mycroft.ai. Also feel free to AMA about Mycroft.AI. Fairly technical, I use Ubuntu 16.04 here.
This is a huge project. From the Wikipedia page on intelligent personal assistants, there are two open source projects trying to do what you want to do: Sirius and Mycroft.
I would start by looking through how those projects are structured and perhaps think about using some of the libraries they provide.
One more reason to contribute to this project. This project makes things like mycroft.ai possible (a voice assistant that doesn't send your voice recordings to Amazon/Google the way Alexa and Google Home do).
Can't Alexa be set up so that it only listens when you actually push a button? I know the FireTV has a mic on the remote and will only listen when the mic button is pressed. Maybe get a FireTV instead?
Or maybe get a Mycroft-based device (https://mycroft.ai/get-mycroft/) and see if it's a suitable replacement?
We only officially support English at the moment, because language support needs to be implemented in every layer of the stack to be effective. We have some docs available that show how to change the Wake Word and some of the parts of the stack into other languages. https://mycroft.ai/documentation/language-support/portuguese/ (shoutout to /u/JarbasAI who wrote a lot of that).
Check out the Mycroft project. It's an entirely open source effort to build a voice platform and keep it free and open. They even sell a hardware device that uses easily available parts (like a Raspberry Pi for the core). The system does work, but it's early yet, so they don't offer a lot of functionality; they are making steady progress, though.
I would strongly suggest rather than trying to build your own from scratch that you instead contribute to open platforms like that so that everyone in the world could someday benefit.
Have you heard of MyCroft.ai? I am a contributor there, and I think this project is super cool! We should collaborate! I think it's super important that this technology is open source and promotes privacy. For this to go well, we all need to work together to take some of the market away from Google, Microsoft, and Amazon.
so, just a few thoughts because i like the idea.
the first is that there's already something similar used in scba masks (for firefighting, or similar.). this is basically a repeater system, and they're fairly bulky and sound like poop, but they work. don't know how useful they'd be, but it might be something worth a look at.
secondly, it might be easier to bluetooth to a phone that they can read, instead. this would provide a few things- one, the phone can handle the speech-to-text (or perhaps more likely, hand it off to something like amazon alexa, siri, or, my favorite, Mycroft.) and phones all already have built-in displays- and as far as a display on a mask goes, they're heavy, rigid and you'd have to figure out some way of breathing around it. (take a look at the rubber filter-masks for things like particulates and such like.)
finally, you're competing against pen and paper/ tapping it out on a phone. if it's not as simple as and faster than that, it won't go very far.
one thought would be to set up a Pycroft (a raspberry pi running an instance of mycroft. this is what i use in my home, and it's pretty reliable - though mycroft does take a lot of fiddling. it already has speech recognition and all that.)
p.s. i hope your friend's surgery goes smoothly. those implants are life changing. (though, there was a kid at school who had gotten one.... apparently the first thing he heard was his dad farting.)
cool projects that exist because of this dataset:
- mycroft.ai (a privacy focused Amazon Alexa alternative)
- DeepSpeech https://github.com/mozilla/DeepSpeech (an open neural network for speech recognition)
those things work better the more data you can give them
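For anyone curious what using the trained model looks like, here's a minimal DeepSpeech transcription sketch. The model/scorer filenames match the 0.9.x release and will differ for other versions:

```python
# pip install deepspeech numpy
import wave
import numpy as np
import deepspeech

model = deepspeech.Model("deepspeech-0.9.3-models.pbmm")
model.enableExternalScorer("deepspeech-0.9.3-models.scorer")

# DeepSpeech expects 16 kHz, 16-bit mono PCM
with wave.open("audio.wav", "rb") as w:
    frames = w.readframes(w.getnframes())

audio = np.frombuffer(frames, dtype=np.int16)
print(model.stt(audio))  # prints the transcription
```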
Of course it doesn't, and at this point it's simply out of reach for non-technically-savvy consumers as well. (Though that'll hopefully change with the release of the Mycroft Mark II, which is like an Alexa device out of the box.)
The cost of privacy.
Mycroft is an active well-supported FOSS voice assistant project that's worth checking out. Unfortunately, Android implementation is still very preliminary, not even close to being ready for general use (they focused initially on PC/laptop, RPis, and dedicated devices). Still worth knowing that someone is, at least, trying.
For home automation, look into Mycroft; it's an open-source voice assistant similar to Alexa that you can host on a Pi. You'll probably want that separate though, as the Pi is relatively low-powered.
For project development, why not look into hosting a Git server? You can either run your own spin of it, which will be more barebones and lightweight, or look into GitLab, which includes things like automated DevOps pipelines and CI/CD. That way you can write code, and it will automatically be tested, built, and deployed if you so choose.
Last note, with a Raspberry Pi why not just use Raspbian Lite? It's very barebones and you don't have to strip anything down, just remove the standard pi user and you're good to go.
Internet connected / smart / automatic " "
Fill in the blank. For example, I have a RasPi and some 1-wire probes wired into my AC, so I can monitor its in/out temps, freon temps, and duty cycle.
All those IoT devices that spy on you and monetize your existence, you can implement on your own with Linux and keep your data - for example, Mycroft as an alternative to Alexa.
Check out Mycroft. It is an open source assistant. It looks like they have RPi builds. Haven't given it a shot yet, but it looks promising, especially if you have a concern with privacy.
Edit: may have to retract that last sentence. You still need to create an account. Don't know if you can host your own server though.
Thing is, you agree to that when you install it. That's the trade-off that you accept to have things such as live traffic reports and routing. I'm a big advocate of privacy. I'm also realistic. I have a cell phone. I use Waze. I spend far too much time on Reddit. These technologies have been transformative in my life (though Reddit has been a net negative insofar as productivity is concerned). I accept these improvements at the cost of giving up some level of privacy.
I've decided that the convenience of not getting lost or severely delayed in traffic is worth sharing my location with Google, even when I'm not using the app, since crowdsourced location data is the only way for it to reliably detect changing traffic conditions.
A cell phone simply cannot work without geographic tracking at a minimal level.
I don't really have a good excuse for Reddit and a quick perusal of my comment history will give you a pretty good idea about all sorts of salacious details about me.
For me personally, I draw the line at being listened to. For some people, having the ability to use a command word to play Darude Sandstorm is enough. For the rest, there's r/privacy.
Oh, and Mycroft.
There will always be some parts of the stack that are dependent on the internet, because an assistant is made powerful - and useful - through the use of third party APIs for information. That said, we're taking initial steps to have a standalone architecture. https://mycroft.ai/blog/mycroft-personal-server-conversation/
Yeah, great question. We're uniquely positioned for devices like cars because many car manufacturers will not want their users' data locked up with giants like Google and Amazon. One of the challenges we need to address before putting Mycroft into cars is to have an entirely standalone architecture available - one that is not dependent on the cloud. We've taken some first steps in this direction, which you can read more about at https://mycroft.ai/blog/mycroft-personal-server-conversation/
Hi there! The Mycroft for Android project link is community owned / driven - so the progress is really dependent on community efforts. We have a chatroom for Android for anyone interested.
Eh, they are still bad for privacy.
Whether or not data is sent to advertisers, it is still possible for a third party to turn one into a wiretap. However, third parties could also compromise his other (IoT) smart home stuff, as well as his tablets, phones, and computers depending on what he has.
See if he'd be interested in a Mycroft. I'm sure that could be connected to at least some of his IoT stuff, and it's confirmed privacy-friendly because it's all open source.
Meanwhile, there's a really cool open source project that already exists:
https://www.kickstarter.com/projects/aiforeveryone/mycroft-mark-ii-the-open-voice-assistant
That said, I still haven't figured out what use I'd have for this kind of thing, so I'll pass…
There's a few projects that aim to support that use-case:
I haven't tried any personally, but I believe Jasper and Mycroft are targeted to run on Raspberry Pi.
So, there are a couple of different ways they can be compared. One is the number of Skills / Intents available, and the other is how well the Intents are 'matched' (slot-matching). We haven't benchmarked against Snips.AI (yet), but you can see our Skills here: https://mycroft.ai/documentation/skills
The short answer is that you can run your Skill locally, so no, you don't need to expose your services to the internet. The Skill itself may need to make calls to api.mycroft.ai, but your services shouldn't need to be exposed.
Have a look at our Skills development documentation here: https://mycroft.ai/documentation/skills/developing-skills/
The Skill is installed locally at /opt/mycroft/skills - you don't have to upload it to mycroft.ai.
Best, Kathy
Yeah I'm not keen on bugging my house for some 3rd parties either.
They are all in the early stages of development, but there are projects to do this. The most interesting is https://mycroft.ai/, which you can apparently self-host. No experience with it.
You might want to follow the https://mycroft.ai/ project.
They are making a Siri-like "AI" built on the Pi, and the project has been open source from the start. I haven't tried anything yet, but they have several parts working, including TTS and a language parser.
An Arduino doesn't have the memory or processing power for voice recognition. The Mycroft software is pretty much the de facto standard for DIY voice recognition systems and will run on a Raspberry Pi 3 B or above. It's overkill for your requirement but there aren't many options for DIY voice recognition.
As far as I can tell, all of these assistants are designed to work from a central hub, desktop, or server. The Sapphire Framework is specifically designed to work on mobile devices without the need of any network connectivity.
Originally, this was supposed to be in conjunction with Mycroft.AI so that you could run Mycroft on Android. However there are significant differences between a Linux system and Android, which caused me to change the focus of development.
A secondary reason I focused on mobile devices (and hence my own project) over server-based devices is that many users globally only own a single computer, and that is their mobile device. I wanted to develop a tool that could be used by the broadest number of people, which is why I started the Sapphire Framework project.
That's the part the big vendors do in their big, expensive cloud infra: speech recognition.
As far as I know, in principle you can run the speech recognition part of Mycroft on a Pi (amazing nobody mentioned it), but there will be long delays between saying the phrase and getting the results, which doesn't give a great impression. But you can also run the recognition code on a more powerful computer, if you have access to one.
Also, I read at one time that Mycroft was going to release a personal server that would likely need a GPU to run. I went down this road with that in mind, but I've seen nothing else about it since.
https://mycroft.ai/blog/mycroft-personal-server-conversation/
Hmmm.
Cool!
Most approaches I've seen work in a "record voice, send to [big tech], get text back, process text" fashion. If you have a problem with not having privacy... that's an issue. That approach is just as bad as using an off-the-shelf product.
There are tutorials online on how to get Google's speech-to-text to take your recordings and send the text back to you.
Then just write a Python program that listens to your microphone at all times, start it with cron on boot, and you have something (that's where the challenge starts).
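A minimal version of that loop, using the SpeechRecognition library; the script name and path in the crontab comment are just examples:

```python
# listener.py - start on boot with a crontab entry like:
#   @reboot python3 /home/pi/listener.py
# pip install SpeechRecognition pyaudio
import speech_recognition as sr

r = sr.Recognizer()
with sr.Microphone() as source:
    r.adjust_for_ambient_noise(source)
    while True:
        audio = r.listen(source)
        try:
            text = r.recognize_google(audio)  # your audio leaves the machine here
        except sr.UnknownValueError:
            continue
        print(text)  # this is where the real challenge starts
```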
Here is a privacy focused project : https://mycroft.ai/
The biggest problem is the natural language processing.
Here is a good Python library, https://www.nltk.org/, which comes with a free book that explains what it does: https://www.nltk.org/book/. I think it's a good book to show what's possible, what the current approach is, to some degree why that fails, and most importantly why you probably can't satisfy your fantasy.
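To show the flavour of it, here's a tiny NLTK example: tokenize a command and tag the parts of speech, which is roughly step one of any intent parser (the example sentence and tags are illustrative):

```python
# pip install nltk
import nltk

nltk.download("punkt", quiet=True)
nltk.download("averaged_perceptron_tagger", quiet=True)

tokens = nltk.word_tokenize("Remind me to water the plants at six")
print(nltk.pos_tag(tokens))
# e.g. [('Remind', 'VB'), ('me', 'PRP'), ('to', 'TO'), ...]
```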
The TL;DR of AI is "basically, it's all statistics," and for that you need lots of data. Not just any data, though: data you process first. By hand. Even the best, cutting-edge stuff, unsupervised learning, still requires you to give the AI lots of cat pictures to train on when you want to train it on cats. You have to make that selection first.
That also means that anything individual or actually personal to you, as "personal" assistant would imply, is hard.
General thinking is 100% unsolved. There are no approaches beyond "let's just make the neural network really big, and really fast, and use really diverse inputs, maybe it will do something".
Mycroft is working together with Jaguar and Land Rover. It is an open source, private voice assistant.
I'm pretty sure Mycroft has Jarvis as a Wake Word.
It's not Alexa, but it is a voice-controlled digital assistant.
First of all, Artificial Intelligence is a really broad set of strategies. Saying "should I use AI or machine learning" is kind of like saying "should I use my skills or my physical abilities". They're both really broad, AI more so than ML.
To answer your question, you might start by looking up blog posts about text classification. There are good tutorials that will show you how to read a product review and guess how many stars the person gave it, for example. Here's one to start you off, but there are lots.
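Here's a toy version of that star-rating task with scikit-learn, just to show how little code the basic pipeline takes; the reviews and labels below are made up for illustration:

```python
# pip install scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "works great, love it",
    "broke after a day",
    "decent for the price",
    "total waste of money",
]
stars = [5, 1, 3, 1]  # made-up labels for illustration

# Vectorize the text, then fit a simple classifier on top
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(reviews, stars)
print(clf.predict(["pretty good value"]))  # predicts a star rating
```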
You also might be interested in Mycroft. He's a chatbot platform kind of like Alexa but he has a little robot face that you could use.
To add to this, I would suggest getting in contact with the Mycroft project developers, who also use Mozilla's voice assets, among other FLOSS elements, as well as adding their own. Mycroft is a free/libre open source alternative to voice/AI assistants such as Siri, Alexa, and Google Assistant. Their voice components, and for that matter their AI, may be helpful for the development of this new captcha alternative, audio and otherwise.
He made a really good point though.
If you really wanna avoid tracking on your smart speaker/home automation, use https://mycroft.ai/ and https://www.home-assistant.io/
Echo Dots phone home a lot; it shouldn't really be a surprise. It's Amazon... tracking you is part of their business model. Even with Pi-hole, I really doubt you can completely turn off tracking without losing most features on the Echo Dot.
Hi Spiral,
One of the limitations is that sudo is required for many functions; if you opt to require a password for sudo during setup, then you need to enter it manually on each boot.
I have added some additional text to warn of this during the setup wizard though not sure if it's made it into production yet.
It's also worth noting that the community identified a number of fixes for AIY and a new stable image was released over the weekend to incorporate these (amongst other things). So it is probably worth grabbing the latest image dated 2019-07-20.
I don't think it's been "officially" discontinued, but Hey Athena hasn't been updated since 2016. Some other open source virtual assistants you may want to look into are Mycroft, Kalliope, and Dragonfire.
I haven't tested it myself, and I can't validate if this information is still accurate.
Skimming through mycroft.ai blog posts, they were 90% accurate in 2018, and they are still collecting data and training to improve this.
I am sure someone on their Mattermost knows more.
It's unfortunate that it takes time to make something like this, but the bigger companies get a lot more data and have a lot more developers.
Hi there,
If you have not received a response to an email, I certainly want to track that down and ensure you get a reply. I'll send you a PM to get more details.
In terms of the Mark II, we are open and honest about the challenges we have faced in its development, and what our steps are going forward. Every day the Mycroft Team and the broader Community make progress on both the hardware and software and we will continue to publish details on these improvements.
Big announcements will be posted here, but I don't want to spam the sub with every blog post, so if you want to keep track of all progress updates, please subscribe to our newsletter at the bottom of mycroft.ai.
I know it can be frustrating, though I assure you the team are working tirelessly to get the Mark II refined, produced and into your hands. Thanks to all our Backers and Community members for being a part of this journey with us.
SURPRISE! Yes!
FORTUNATELY for me I bought a Hubitat, although I've slacked since getting it. HOWEVER, TOMORROW MORNING: so long wink, ya useless POS!
[EDIT]
BOGGLES my mind WHY they INSIST on cloud BS when they DO NOT MONETIZE it! It just shafts customers when it INEVITABLY GOES DOWN.
OTOH IF they HAD MONETIZED it I would've been gone even longer ago, or more likely NEVER purchased one.
I guess the answer is that putting your shit in a shittily flakey cloud is cool and shit versus a RELIABLE product... I wonder how much longer before their bankruptcy filing...
[/EDIT]
[EDIT2]
On a slightly more reliable note, but not wanting Amazon/Google collecting any more info than they already have, I'm also going to try out a mycroft setup, or in my case picroft. https://mycroft.ai/
Problem will be that not even hubitat apparently has support for it ATM; OTOH, maybe that's the hack project I need after I wrangle hubitat into action and build a picroft... (the prebuilt mycrofts are asininely expensive for what they are...)
[/EDIT2]
[EDIT3]
At this point in time we need a wink deathwatch megathread, but hey just my opinion as I can't see them surviving these CONSTANT outages...
[/EDIT3]
ctrl+f mycroft, 1 result. Why am I not surprised.
I don't use Mycroft, but I feel like I'll want to use it once it's gotten better, so I just bought the annual donation thingy: https://mycroft.ai/product/mycroft-supporter/. Just imagine how many more resources they'd have if only half the people who complain about proprietary home assistants collecting their private data did the same.
Because you need glasses. ;)
Seriously though, the phonetic similarity is purely coincidental. You can read this blog post for more information on why we named the company Mycroft. https://mycroft.ai/blog/history-of-mycroft-origin-story/
This blog post on our website explains the origin of our company name, among other things. The phonetic similarity to Microsoft is nothing more than coincidence. https://mycroft.ai/blog/history-of-mycroft-origin-story/
Our skills are all written in Python. You could look at the mycroft-skills repo in GitHub to get an idea of what skills look like. https://github.com/MycroftAI/mycroft-skills
We also have documentation on our website that will give you some guidance: https://mycroft.ai/documentation/
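As a taste, here's a minimal skill sketch in the style of the examples in that repo; exact imports and decorators vary between Mycroft versions, so treat this as illustrative:

```python
from mycroft import MycroftSkill, intent_file_handler

class HelloWorldSkill(MycroftSkill):
    # Matches example phrases listed in the skill's hello.intent file
    @intent_file_handler("hello.intent")
    def handle_hello(self, message):
        self.speak("Hello from a custom skill")

def create_skill():
    # Entry point the skill loader calls
    return HelloWorldSkill()
```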
Mimic is already available as a standalone piece of software - so if you wanted to integrate it into a screen reader, you could. Basically, Mimic just needs to be fed a text file and it will read it back. Different formats, like PDFs, are likely to be a lot trickier.
https://mycroft.ai/documentation/mimic
At the moment we don't have Sonos support, but it's something the Sonos community have requested too; https://en.community.sonos.com/music-culture-the-industry-228997/mycroft-integration-6800241
Our Skills framework is really open for anyone who has some Python programming too - https://mycroft.ai/documentation/skills/introduction-developing-skills/
Great suggestion. Our Skills development process at the moment is definitely not as streamlined as it could be; however, we have just released the Mycroft Skills Kit - msk - which makes things a lot easier.
https://mycroft.ai/documentation/skills/msk/
This is a great question.
For people with a speech impediment, there are some barriers to using voice assistants, the primary one being speech recognition - the Speech to Text layer of a voice stack. The way this is likely to be solved in the future is by "training" the Speech to Text layer to recognise a specific individual's voice - although this will require a fair amount of training, just as, say, Dragon NaturallySpeaking needs a lot of training to get "used" to an individual's voice. So no, not at the moment, but with advances in machine learning happening so rapidly, it won't be far away - think years rather than decades.
The other benefit we can see here is in creating a voice. Those with progressive conditions such as ALS or MS may eventually lose the power of speech, and the ability to "record" a voice before it is lost will likely be something that is valued.
That's really one of the great "positive" use cases for AI and machine learning - to be able to provide humanitarian benefits.
Hi there, Kathy from Mycroft here. We're an open-source, privacy-focussed company that makes a voice assistant. Your transcriptions are anonymised, and your utterances are purely opt-in for our open dataset. Have a look at; https://mycroft.ai
Hey, maybe it has already been mentioned, but what about alternatives like the open source assistant Mycroft? You could buy one of their devices or build your own; I think the light switching should also work with it.
The device is entirely backed by Google's back-end services and lock-ins. It's not the device you would need to compromise, it's Google itself, which is where the gatekeeping is happening.
If you want an open smart speaker, this might be a good place to start: https://mycroft.ai/
Let me advertise something I really, really like:
Mycroft, the fully opensource voice assistant:
Open source means that you, or your friendly programmer, can check its source to see if it's leaking anything anywhere. It also means that you can develop it further.
The cherry on top of the cake is that you can install it on your local computer or on a Raspberry Pi (which is ~$20) to do the voice assisting for you, at zero cost beyond your hardware.
This is something we have planned for the next few months. Our language support is currently experimental and we have more information available on German here.
Check out Mycroft AI, it's an open source personal assistant to replace Alexa/google home. It can interface with Home Assistant so you still have full control over your house, but you won't have to worry about the privacy problems that closed source assistants introduce.
All done. I mentioned this project in the survey: https://mycroft.ai/ I am not associated with this project, but it has always piqued my interest. I am hoping to pick up their second-gen assistant in the near future. It is open-source centric, so it might be worth looking at as a comparison to the big tech companies.
I know what wireshark is, and I would never purchase an alexa/google home/etc to bother testing them. Even if it only sends data when I tell it to, I wouldn't trust it to only send my queries.
If I truly felt these products would fulfill any purpose at all, I would use https://mycroft.ai/ because it's FOSS and I can actually know what it's doing in its entirety.
I run Mycroft on mine without a monitor, and it does just fine.
You will probably need a monitor to get it set up, but after that you should be fine; you can test it before you give it.
Hi there, Kathy here from https://mycroft.ai.
Happy to answer any questions you might have about our product.
Jasper development has pretty much stopped; we're shipping new releases every fortnight.
No, if the software and hardware work as intended, it shouldn't. Both Alexa and Google Home wait until they detect the activation word (which is detected offline) before sending any kind of data to the servers.
This shouldn't stop you from building your own personal assistant though.
Hi, it's Kathy here from Mycroft.AI. Happy to answer any questions you might have about our hardware and software. We are actively developing, and releasing updated versions every fortnight - and each version keeps getting better and better. Of course, we don't have the resources of a Google or an Amazon behind us, but for a small open source player we think we're doing pretty well.
We do have a Mycroft image for RPi, and you can find it here: https://mycroft.ai/get-mycroft/
We're also going to be partnering with Mozilla Deep Speech to leverage their Speech to Text, and we have an open Skills Development framework. OAuth will be released shortly so that developers can connect third party APIs that rely on OAuth into Mycroft.
Sing out if I can provide further information. Best, Kathy
Hey @universaljester, I have an answer for you on this one; sorry it took a couple of days to come back to you.
The keyserver issue is from a really old version of Picroft, so our advice is to download the latest image from https://mycroft.ai/to/picroft-image and burn that to your SD card using Etcher.
But there is a nasty bug in the current Picroft, 0.9.10, which causes processes to hang and overheats the RPi. Hold off until 0.9.11, which should drop in the next 24-36 hours.
Best, Kathy