Week 4 - June 1 - 5

Alright - week 4 down! As I mentioned in the week 3 blog, last week we hit a roadblock. This week, we ran that roadblock down! The week was very productive and successful, full of learning, and we got a number of things checked off our action plan.

To start, remember when I mentioned that we needed to go back to the basics with Node.js and how it works in our code? Well, that is how we started off the week. Mike and I needed to re-learn Node.js, but as we kept researching how to connect our ASRs to WebRTC and how to write up the ASR scripts in Node.js, we started to get familiar with it. We also conferred a lot with our team, and they answered all the questions we had. So we were really learning it as we worked on it and made sense of the errors, and that, I believe, is the best way to learn a language and a codebase. In the end we didn't spend much time going back and relearning from scratch; it was more about learning from the code we already have and from our team.

The game plan for the week was, first, to get more comfortable with Node.js, which Mike and I did through the coding and the help of our team as explained earlier; second, to get GCP Web Speech API and IBM Watson transcripts up like we already had for MS Azure; and third, to attempt to apply those to WebRTC. However, as we started working on the two ASRs - GCP Web Speech API and IBM Watson - we realized two things. First, IBM Watson is a COMPLICATED and TOUGH interface to work with. We hit more than one wall with it, and it was honestly error after error. So we, as a team, decided to drop IBM Watson and focus on GCP Web Speech API and MS Azure. The second thing was that GCP Web Speech API, WebRTC, and MS Azure all require access to your computer's webcam and microphone. Easy, right? Just enable it in your browser settings. But no, of course it was not that easy: the browser we are working with, Google Chrome, requires HTTPS before it will grant that access. And to serve HTTPS from our server, we needed an SSL certificate, which our original sandbox server did not have.

So Mike first tried to set up an unofficial certificate with a GitHub tool called "mkcert", but that did not work. You see, Google is not stupid; Chrome automatically recognized that the certificate was not from a trusted authority, so we had to go with another option: getting an official certificate through "Let's Encrypt", which provides a free three-month certificate. However, here came another problem. For Let's Encrypt to verify the domain and issue the SSL certificate, it needs to reach the server on port 443, and of course Gallaudet University blocks that port by default. Filing a ticket to open port 443 on our VM and have Gallaudet unblock it would take several days, and we did not want to wait that long. So Raja and the team decided to go with another server that he set up at his home. That server sits outside Gallaudet's network and gives us access to port 443 so the SSL certificate could be issued. With that, the rest of the day and the next two were all about getting the new server up, installing SAMBA and RDP on it for access to edit/read/open/save files, and cloning WebRTC and its files onto it. We needed the new server - Sandbox2 - to be configured like the first one but with an SSL certificate, so we could test our ASRs and implement them into WebRTC.
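To give a sense of what the certificate actually unlocks, here is a minimal sketch of serving our pages over HTTPS from Node.js, assuming an Express static server and the default Let's Encrypt certificate paths. The domain name and file paths are placeholders, not our real Sandbox2 configuration:

```js
// Minimal sketch: serve the WebRTC/ASR demo pages over HTTPS so Chrome
// will allow getUserMedia() access to the webcam and microphone.
// The domain and certificate paths below are placeholders.
const fs = require('fs');
const https = require('https');
const express = require('express');

const app = express();
app.use(express.static('public')); // demo pages and client-side scripts

const options = {
  key: fs.readFileSync('/etc/letsencrypt/live/sandbox2.example.org/privkey.pem'),
  cert: fs.readFileSync('/etc/letsencrypt/live/sandbox2.example.org/fullchain.pem'),
};

// Port 443 has to be reachable from outside the network, which is exactly
// why we needed a server outside Gallaudet's firewall for Let's Encrypt to work.
https.createServer(options, app).listen(443, () => {
  console.log('HTTPS server listening on port 443');
});
```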

Whew, we ran into a lot of problems, but we got through them all and got the Sandbox2 server running smoothly - and better than the first.

After we got through all of the walls with the SSL certificate and the setup of Sandbox2, we moved on to getting MS Azure and GCP Web Speech API connected to WebRTC and streaming through the API. We started with some more research and testing, then realized we could go back a step and install EasyRTC, which has a simple audio and video RTC sample that later builds up into WebRTC, on our server so we could test the ASRs and get them connected. There were demos on GitHub, so Mike cloned them into Sandbox2, hooked them up with the SSL certificate, and we got them working.

Next, we needed the ASRs to work in the same environment where the live video and audio stream is shown. Mike was able to set up a window where the video was connected and GCP Web Speech API ran below it, streaming the audio from the microphone and displaying the text in a textarea. I was able to get mine connected as well, based on Mike's code. We had the ASR working in two different modes: one with a button that initiates listening mode and starts the ASR, and another that starts the moment the window loads, meaning it is in constant listening mode. We set up the second mode because that would be the way to implement it into WebRTC's RTT. Whew, finally a successful check off the action plan that did not involve a big brick wall to the face. Now that was progress, compared to the previous week!
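For a rough idea of the constant listening mode, here is a small sketch assuming Chrome's built-in Web Speech API (webkitSpeechRecognition) and a textarea with the id "transcript"; the element id and exact wiring are illustrative, not copied from our actual scripts:

```js
// Sketch of the "constant listening" mode: start recognition as soon as
// the window loads and keep updating a textarea with the transcript.
window.addEventListener('load', () => {
  const recognition = new webkitSpeechRecognition();
  recognition.continuous = true;      // keep listening instead of stopping after one phrase
  recognition.interimResults = true;  // show partial results while the person is still speaking

  recognition.onresult = (event) => {
    let text = '';
    for (let i = 0; i < event.results.length; i++) {
      text += event.results[i][0].transcript;
    }
    document.getElementById('transcript').value = text;
  };

  // Restart automatically if the engine times out, so it stays "always on".
  recognition.onend = () => recognition.start();

  recognition.start();
});
```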

This all happened at the end of the week, so we ended on a great note. We got most of what we wanted done in the action plan - and more. Our next step is to have the two ASRs work in WebRTC, inside the script Norman had already written for the RTT. See, Norman wrote a script that sets up a textarea/caption area under each user's video so everyone in the room can communicate with live text, just like the video. The script records keystrokes and sends them out to all the users in the room, who see the text letter by letter, as if they were watching you type. So, with that code already written, we want to use the same functions but change them so they connect to the ASRs and display the transcript of spoken speech in the same textarea. That was the next thing on our action plan, but we hit Friday first, so it will now be our first wall to tackle in week 5.
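The rough idea, sketched below, assumes EasyRTC's messaging helpers (sendDataWS, setPeerListener, getRoomOccupantsAsArray); the message type 'rtt', the room name, and the caption element ids are placeholders, and the real version would reuse the functions Norman already wrote:

```js
// Sketch of the week 5 plan: push ASR transcript text through the same
// kind of channel the RTT script uses for keystrokes.
function broadcastTranscript(text) {
  const occupants = easyrtc.getRoomOccupantsAsArray('default') || [];
  occupants.forEach((easyrtcid) => {
    // Send the latest transcript to every other user in the room.
    easyrtc.sendDataWS(easyrtcid, 'rtt', { text: text });
  });
}

// On the receiving side, show the incoming text in that sender's caption area,
// the same way keystrokes are displayed letter by letter.
easyrtc.setPeerListener((senderId, msgType, data) => {
  const captionArea = document.getElementById('caption-' + senderId);
  if (captionArea) {
    captionArea.value = data.text;
  }
}, 'rtt');
```

In this sketch, the ASR's onresult callback from earlier would simply call broadcastTranscript() with the latest text instead of only writing it to the local textarea.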

Overall, it was a great and productive week. We were able to complete our game plan and win the game (aka win week 4). Now it's time to battle WebRTC and win that game as well.

Written on June 5, 2020