Turns out you can send your mates invitations to do beta testing on your Skill/App before you have submitted it for certification.
(See the previous post about developing a Skill.)
The development from the previous post was all done just using a PC, no device available to me at that point.
I now have an Amazon Echo. Once I hooked it up to my wifi using the Android Alexa app, my Skills in development just appeared on the [All Skills] tab in that app. A key point here is that I have not yet got to the Certification steps, which proves that as long as you are working locally, you can test on a real device.
Now, to be awkward (with myself, d’oh), I called the Skill one thing (GovanBuses) and gave it a different invocation name (StuffOnAShip). Consequently, I was dumbly asking Alexa to “Ask GovanBuses…”, when I should have been saying “Ask StuffOnAShip…”. It is the invocation name that Alexa recognises. Moral: the name of the Skill and its invocation name should probably be the same, unless there is a very good reason to do otherwise. I cannot think what that good reason might be.
By the way: don’t be looking for anything functionally useful in all this. I have taken some inspiration from the Amazon tutorial, and some nods to my intended use-case… more of which in the near future. But the combination right now just proves that I can talk to devices and endpoints, QED.
The tutorials from Amazon themselves are great – no need for third parties, imo. That said, you need your wits about you, as the churn in the UI seems pretty frequent.
And this one is very good on Intents, Utterances, Slots, and Slot Types.
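To make those terms concrete, here is a simplified sketch of how an interaction model hangs together, expressed as a Python dict rather than the exact JSON schema the Alexa console uses. The intent name (GetBusTimesIntent), slot name (stopName) and slot type (STOP_NAME) are hypothetical, invented for illustration.

```python
# Simplified sketch of an Alexa interaction model (not the exact console JSON).
# GetBusTimesIntent, stopName and STOP_NAME are hypothetical names.
interaction_model = {
    # What the user says after "Alexa, ask ..." — this, not the Skill's
    # display name, is what Alexa recognises.
    "invocationName": "stuff on a ship",
    "intents": [
        {
            "name": "GetBusTimesIntent",
            # Sample Utterances; {stopName} marks where the slot value appears.
            "samples": [
                "when is the next bus from {stopName}",
                "next bus at {stopName}",
            ],
            # Each Slot has a name and a Slot Type; Amazon provides built-in
            # types, or you can define a custom one with its own value list.
            "slots": [
                {"name": "stopName", "type": "STOP_NAME"},
            ],
        }
    ],
}
```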
This is what I have and have not, thus far:
- a working Skill, in that it takes an Utterance (in the UI emulator right now, rather than the VUI), …
- passes it to an endpoint on AWS, referencing the required lambda function
- the “server” calls the lambda function on behalf of the Skill, and responds
- the client side (the UI emulator) receives the response and utters it (well, renders it, and for now I press the Listen button)
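The server side of that round trip can be sketched as a minimal Python Lambda handler. The event is the JSON request Alexa POSTs to the endpoint, and the return value is what the emulator renders (and would speak). The intent name and speech text here are hypothetical placeholders, not my actual function.

```python
def lambda_handler(event, context):
    """Minimal sketch of an Alexa Skill Lambda handler.

    Alexa sends the request JSON as `event`; we branch on the request type
    and intent name, and return a response envelope with the speech to utter.
    GetBusTimesIntent and the speech text are illustrative only.
    """
    request = event.get("request", {})
    if (request.get("type") == "IntentRequest"
            and request.get("intent", {}).get("name") == "GetBusTimesIntent"):
        speech = "The next bus is in five minutes."
    else:
        # LaunchRequest, or any intent we don't handle.
        speech = "Welcome. Ask me about buses."
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech},
            "shouldEndSession": True,
        },
    }
```

The emulator (or device) never talks to the Lambda directly; the Alexa service sits in the middle, matching the Utterance to an Intent and forwarding the structured request to the endpoint.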
I have a no-brand HD action camera, which I wear on my bicycle helmet. 10 minutes of HD video consumes about 1GB, so an hour (a typical ride) is about 6GB, which will not fit onto a standard 4.7GB DVD, for those rides when I want to keep the footage.
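The back-of-envelope arithmetic, for the record (the 1GB-per-10-minutes figure is just what I observe from this camera):

```python
# Does an hour of helmet-cam HD footage fit on a single-layer DVD?
GB_PER_10_MIN = 1.0     # observed from this camera's output
RIDE_MINUTES = 60       # a typical ride
DVD_CAPACITY_GB = 4.7   # single-layer DVD

footage_gb = GB_PER_10_MIN * (RIDE_MINUTES / 10)
fits_on_dvd = footage_gb <= DVD_CAPACITY_GB

print(footage_gb)    # 6.0
print(fits_on_dvd)   # False — hence the need to shrink the file
```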
I used my own Handbrake wrapper to convert from the camera’s native .mov to .mp4. Astonishingly to me, the output was way bigger than the input. I then played with reducing the framerate, and with turning on a ‘for the web’ switch in the Handbrake UI.
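Roughly the HandBrakeCLI equivalent of those UI settings, sketched as a command builder (it constructs the command without running it). The framerate and quality values here are illustrative, not the ones I actually tried, and the filenames are made up.

```python
import subprocess  # only needed if you actually run the command


def handbrake_cmd(src, dst, fps=25, quality=22, web_optimized=True):
    """Build a HandBrakeCLI command line for a .mov-to-.mp4 conversion.

    Mirrors the UI knobs mentioned above: a capped framerate and the
    'web optimized' switch. fps/quality defaults are illustrative.
    """
    cmd = [
        "HandBrakeCLI",
        "-i", src,            # input file (the camera's .mov)
        "-o", dst,            # output file (.mp4)
        "-e", "x264",         # H.264 encoder
        "-q", str(quality),   # constant quality; lower = bigger file, better quality
        "-r", str(fps),       # cap the framerate
    ]
    if web_optimized:
        cmd.append("--optimize")  # move the MP4 header to the front for streaming
    return cmd


# To actually run it:
# subprocess.run(handbrake_cmd("ride.mov", "ride.mp4"), check=True)
```

The quality/size trade-off lives almost entirely in that `-q` value; the framerate cap and web-optimize flag move the needle much less, which matches my experience in the UI.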
In summary, Handbrake can do only so much when trying to strike a balance between file size and quality. So on this occasion I’m giving up.
Decided I would go this way. FileUtility is the start… need to get the other utilities aligned with that.
This is firstly a test post, using Chromium on Linux, specifically Zorin. Although 30 (ahem) years ago I might have considered myself a Unix whizz… well, I’ve forgotten a lot of what I knew. It’s sitting there, but needs to be teased out. And of course at that time there was no GUI to speak of.
The very first positive thing I note is the speed of bootup (I installed native rather than running live) compared to Windows (7 in this case). The mechanical disk in the old laptop I am using really struggles to return a usable desktop in less than, say, 5 minutes on Windows, whereas having wiped Windows and replaced it with Zorin, I would say it is close to 1 minute. This is entirely my impression, and anecdotal.
Apart from installing LastPass for Linux, my first non-admin task is to install Android Studio. Instructions for Linux are here.