Week 3 - May 25-29

Week 3, whew! We all know everyone has their struggles, and well... I hit mine, big time, this week. Admittedly (and this is important for growth), this week was a tough one. Remember when I mentioned that I was new to Node.js? Yeah, well, I still am. I'll elaborate more on that further down this blog. But to open up: we started the week with a meeting to create a game plan, which consisted of getting WebRTC onto our open sandbox server, figuring out how to get GCP and IBM Watson to output live transcripts as VTT files, and connecting the three ASRs to the WebRTC we'd set up on the server.
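(Quick aside for anyone who, like me a few weeks ago, hasn't seen one: WebVTT is just a plain-text caption format made of timestamped cues. A minimal file looks something like this - the timestamps and text here are made-up examples, not real output from any of the ASRs:)

```
WEBVTT

00:00:00.000 --> 00:00:03.500
Hello, and welcome to the meeting.

00:00:03.500 --> 00:00:07.200
Today we are testing live transcription.
```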

After the meeting, I got started on installing WebRTC on the open server. You see, we already had a server - Gallaudet University's tap server - that we use to code, test, and play around with WebRTC. However, Dr. Raja decided to build a sandbox server - open - specifically for testing the three ASRs before we insert their code into WebRTC on the tap server. With the new sandbox server, we also needed WebRTC installed so we could understand directly how to add the ASRs to the code and see how it all works before going to the tap server. The installation of WebRTC did not take long at all; all I really needed to do was copy the code from the tap server and adjust some of the Node.js code to point to the open server's IP address instead of the tap server's. So that was checked off the list in a jiffy. Fabulous, the week was off to a great start.
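To give a rough idea of where that address change lives, here's a bare-bones sketch of a Node.js signaling server. This is not our actual tap/open code - the 'ws' package, cert file names, and port are all assumptions on my part:

```js
// minimal WebRTC signaling-server sketch; the tap-vs-open change is
// just which server address the browser pages point at
const https = require('https');
const fs = require('fs');
const WebSocket = require('ws'); // assumes the 'ws' npm package

const server = https.createServer({
  key: fs.readFileSync('key.pem'),   // hypothetical cert paths
  cert: fs.readFileSync('cert.pem'),
});

const wss = new WebSocket.Server({ server });
wss.on('connection', (socket) => {
  // relay signaling messages (offers/answers/ICE candidates) to the other peers
  socket.on('message', (msg) => {
    wss.clients.forEach((client) => {
      if (client !== socket && client.readyState === WebSocket.OPEN) {
        client.send(msg);
      }
    });
  });
});

// the part that changed when moving from tap to open: the address the
// browser pages use to reach this server, e.g. on the client side:
//   new WebSocket('wss://OPEN_SERVER_IP:8443')
server.listen(8443);
```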

I then went back to my work from last week: getting IBM Watson to print to a VTT file instead of a text file, while Mike did the same for GCP. After a couple of hours of researching and scratching my head - my first brick wall of the week - I thought, why don't I try to set up a test room for each ASR in WebRTC and have them run a live transcript on an audio file, instead of having them print to a VTT file and then uploading THAT to the webpage? I conferred with my team and they agreed, so I decided to go at MS Azure first, since its Node.js code was the most advanced and could already print to a VTT file, while Mike kept working on GCP. I went on the internet to see if there was a demo or sample that used MS Azure's Speech to Text service on an audio file in a browser, and lucky for me, I found one! Okay, with WebRTC installed and this demo found, I thought, wow, this is going to be a good week. I downloaded the demo, edited the code to match up with WebRTC, and implemented it in the MS Azure test room. The page could receive a subscription key, region, language, and an audio file, and then show a live transcript of the audio. Perfect, MS Azure checked.
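For the curious, here's roughly what the test room page does under the hood, sketched with the Speech SDK's browser bundle (window.SpeechSDK). The element IDs are hypothetical and the real demo's code differs, but these SDK calls are the core of it:

```js
// read the user's inputs from the page (IDs are made up for this sketch)
const key = document.getElementById('subscriptionKey').value;
const region = document.getElementById('region').value;
const language = document.getElementById('language').value; // e.g. 'en-US'
const file = document.getElementById('audioFile').files[0];
const transcriptEl = document.getElementById('transcript');

const speechConfig = SpeechSDK.SpeechConfig.fromSubscription(key, region);
speechConfig.speechRecognitionLanguage = language;

// feed the uploaded WAV file to the recognizer instead of a microphone
const audioConfig = SpeechSDK.AudioConfig.fromWavFileInput(file);
const recognizer = new SpeechSDK.SpeechRecognizer(speechConfig, audioConfig);

let finalText = '';
// 'recognizing' fires with partial hypotheses, which makes it feel live
recognizer.recognizing = (_, e) => {
  transcriptEl.textContent = finalText + e.result.text;
};
// 'recognized' fires once a phrase is finalized
recognizer.recognized = (_, e) => {
  if (e.result.reason === SpeechSDK.ResultReason.RecognizedSpeech) {
    finalText += e.result.text + ' ';
    transcriptEl.textContent = finalText;
  }
};

recognizer.startContinuousRecognitionAsync();
```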

I moved on to getting IBM Watson up in its own test room. I started with, of course, the internet, and I found a demo. Nice. I got it set up in the IBM Watson test room page, but then I hit a block: I could not get the demo up and running. I first tried the open server, which does not have an HTTPS certificate, so I thought that might've been the issue, but when I tried the tap server, which does have an HTTPS certificate, I couldn't get it running there either. So I took another look at the errors and did some more research. I realized the error had something to do with authentication - the apikey and url required for IBM Watson's Speech to Text service. But being a new Node.js user, I couldn't truly troubleshoot the error. I consulted Dr. Raja, and he told me to put that on hold because Norman - who knows more about Node.js and how it works with the browser and all - isn't available until next week and could be more of a help. Dr. Raja then told me to go back to MS Azure, since it was the only ASR up and running in its test room, and see if I could connect it to WebRTC's live video stream and display a live transcript on the side using its Speech to Text service.
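For reference (and for future me), here's where that apikey and url plug in when using IBM's official Node.js SDK. I'm assuming the ibm-watson npm package here, which may not be exactly what the demo used, and the credential values and file path are placeholders:

```js
const fs = require('fs');
const SpeechToTextV1 = require('ibm-watson/speech-to-text/v1');
const { IamAuthenticator } = require('ibm-watson/auth');

// both values come from the service credentials page in IBM Cloud
const speechToText = new SpeechToTextV1({
  authenticator: new IamAuthenticator({ apikey: 'YOUR_APIKEY' }),
  serviceUrl: 'https://api.us-east.speech-to-text.watson.cloud.ibm.com',
});

// quick sanity check: transcribe a local audio file
speechToText
  .recognize({
    audio: fs.createReadStream('sample.wav'), // hypothetical test file
    contentType: 'audio/wav',
  })
  .then(({ result }) => {
    for (const r of result.results) {
      console.log(r.alternatives[0].transcript);
    }
  })
  .catch((err) => console.error('auth or request error:', err));
```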

So I went back to MS Azure, but little did I know, an even bigger block was coming. MS Azure also requires authentication. I had that set up by inserting the subscription key and region on the webpage, but I needed them to load automatically when the page started, so the transcription could run as soon as the live video stream began instead of me entering them manually. The demo showed a way to do that with a PHP script, but WebRTC uses Node.js, not PHP, so I had to adapt the code to Node.js and keep the subscription key and region on the server side; otherwise the key would be exposed to the public in the source code. BUT, again, I am new to Node.js, so I could not wrap my head around the code and get it to work. After spending quite some time browsing the net and conferring with my team - Mike was also struggling to get GCP set up - we came to the conclusion that he and I needed to go back to the basics and really focus on understanding Node.js and how it works with the browser with client- and server-side scripting.
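Here's the kind of server-side piece I was trying (and failing) to write, sketched with Express: the subscription key stays in Node.js, and the browser asks the server for a short-lived token instead. The route, env-var names, and use of node-fetch are my assumptions, not the demo's actual code:

```js
const express = require('express');
const fetch = require('node-fetch'); // assumes the node-fetch package

const app = express();
const KEY = process.env.AZURE_SPEECH_KEY; // never sent to the browser
const REGION = process.env.AZURE_REGION;  // e.g. 'eastus'

app.get('/api/speech-token', async (req, res) => {
  // Azure's token endpoint trades the subscription key for a token
  // that expires after about ten minutes, so it's safer to expose
  const response = await fetch(
    `https://${REGION}.api.cognitive.microsoft.com/sts/v1.0/issueToken`,
    { method: 'POST', headers: { 'Ocp-Apim-Subscription-Key': KEY } }
  );
  res.json({ token: await response.text(), region: REGION });
});

app.listen(3000);
```

The page would then fetch /api/speech-token when it loads and call SpeechSDK.SpeechConfig.fromAuthorizationToken(token, region) instead of fromSubscription, so the key itself never appears in the client-side source.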

With all of that, we - well, I - ended the week irritated and frustrated at not understanding Node.js and how it works with the browser well enough to troubleshoot our errors. After a team meeting, we decided to start next week with a focus on going back to the basics and getting familiar with Node.js (client/server scripting), so that we can get the MS Azure ASR connected to the WebRTC video chat and showing live transcription on the side by the end of the week. Hopefully by next week, Mike and I will be Node.js experts and drain a bucket in our game plan!

Written on May 29, 2020