Week 2 - May 18 to 22
Now the second week - we knew what the water temperature felt like, so we were able to cannonball in and start working on the back end of our project. I started the week trying to set up a WebRTC environment with two people's live webcam streams alongside live streams of the two videos we uploaded last week, so we would have a controlled environment for testing our ASRs while still being connected live. To be honest, I hit a brick wall for two days straight on that setup - I could not get the two videos to stream while also live-streaming one's webcam. So I consulted my team, and Norman William proposed we use a third-party tool, ManyCam, which is software that sets up a virtual webcam connected to a media file. Brilliant! That was much easier to use and to implement in the controlled WebRTC environment we wanted to set up. Whew, now that was checked off the list.
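For anyone curious what that looks like in the browser, the gist is that the virtual camera shows up as just another video input device, so you can pick it out with enumerateDevices and feed it to getUserMedia. Here is a rough sketch, not our exact code, and the "ManyCam" label match is just my assumption about how the virtual device is named:

```javascript
// Rough sketch: selecting a virtual camera (e.g. ManyCam) as the video source
// for a WebRTC peer connection. The 'ManyCam' label check is an assumption
// about how the virtual device shows up in the browser's device list.
async function getVirtualCameraStream() {
  // List every media input/output device the browser can see
  const devices = await navigator.mediaDevices.enumerateDevices();

  // Find the virtual webcam by its label (labels need prior camera permission)
  const virtualCam = devices.find(
    (d) => d.kind === 'videoinput' && d.label.includes('ManyCam')
  );

  // Request a stream from that specific device instead of the default webcam
  return navigator.mediaDevices.getUserMedia({
    video: virtualCam ? { deviceId: { exact: virtualCam.deviceId } } : true,
    audio: true,
  });
}

// Attach the stream's tracks to an existing RTCPeerConnection
async function addVirtualCamToPeer(peerConnection) {
  const stream = await getVirtualCameraStream();
  stream.getTracks().forEach((track) => peerConnection.addTrack(track, stream));
  return stream;
}
```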
After finally breaking down the brick wall, we moved on to working with the three ASRs and how their Node.js scripts are written. We also needed to get a sample transcript onto the webpage in WebRTC using Node.js. Full disclosure, Mike and I had never really worked with Node.js or learned about it yet, so we basically had to take a quick crash course on it while getting familiar with how the three ASRs use it. Now man, that was tough. Mike mainly focused on getting the transcript up on the webpage while I worked on the Node.js scripts for the three ASRs. I had to create a Node.js script for each ASR, using its application programming interface (API), that would transcribe a sample audio file I extracted from one of the videos we collected in the first week and run it in the terminal. I was able to successfully get all three to run, each with a different output - printing to a VTT file, to a text file, and straight to the console - because of how each commercial ASR's sample script was written. When I finished that, I went over to help Mike with setting up the transcript sample on WebRTC, and we successfully got that working as well. With all of that, we came to understand a little better how Node.js works and how it works with the three ASRs, so that we can apply it to our controlled environment in WebRTC.
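To give a sense of what those scripts look like, here is a stripped-down sketch along the lines of the Google Cloud Speech-to-Text one, transcribing a local sample clip and printing the result to the console. The file name and audio settings are placeholders, not our exact values:

```javascript
// Minimal sketch of the kind of Node.js script written for each ASR, using
// Google Cloud Speech-to-Text as the example. 'sample-clip.wav' and the
// config values are placeholders for our extracted sample audio.
const fs = require('fs');
const speech = require('@google-cloud/speech');

async function transcribeSample() {
  const client = new speech.SpeechClient();

  // Read the sample audio clip extracted from one of last week's videos
  const audioBytes = fs.readFileSync('sample-clip.wav').toString('base64');

  const [response] = await client.recognize({
    audio: { content: audioBytes },
    config: {
      encoding: 'LINEAR16',
      sampleRateHertz: 16000,
      languageCode: 'en-US',
    },
  });

  // Each result holds one chunk of recognized speech; print it to the console
  const transcript = response.results
    .map((result) => result.alternatives[0].transcript)
    .join('\n');
  console.log(transcript);
}

transcribeSample().catch(console.error);
```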
We ended the week working on getting the Node.js script for each ASR to print its transcript directly to a VTT file for the live transcription on WebRTC. MS Azure's Node.js script already does that, so we shifted our focus to Google's and IBM Watson's scripts. We weren't able to complete that yet, but it will be the first item on our action plan for next week.
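For reference, the VTT part itself is fairly simple; here is a rough sketch of how a recognized phrase could be appended to a .vtt file as a WebVTT cue. The timestamps in the example are made up; in the real scripts they would come from each ASR's results:

```javascript
// Sketch: appending a transcript phrase to a .vtt file so it can be shown as
// captions. The example timing values are placeholders, not real ASR output.
const fs = require('fs');

// Convert seconds to the WebVTT timestamp format HH:MM:SS.mmm
function toVttTime(seconds) {
  return new Date(seconds * 1000).toISOString().substring(11, 23); // "00:00:05.250"
}

function appendCue(vttPath, startSec, endSec, text) {
  // A VTT file must start with the WEBVTT header before any cues
  if (!fs.existsSync(vttPath)) {
    fs.writeFileSync(vttPath, 'WEBVTT\n\n');
  }
  const cue = `${toVttTime(startSec)} --> ${toVttTime(endSec)}\n${text}\n\n`;
  fs.appendFileSync(vttPath, cue);
}

// Example: one recognized phrase from an ASR callback
appendCue('captions.vtt', 2.0, 5.25, 'Hello, can you hear me okay?');
```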