I hate burying the lead in these posts so I won't.
This is a point of real pride for me since I remain the only person working on the code base. While most Americans had a short work week due to the Thanksgiving holiday, I used that time to code.
(For some context, I moved to Austin a couple months back and am about 1,500 miles from my family. Given the state of things, I didn't feel comfortable traveling to see them for this holiday.)
Here's an abbreviated list of new features:
Took more than a few energy drinks with some electronic music blasting in my ears but worth it in the end.
While many of these items won't affect the core user base, who use the tool to create quick stream recap videos, the flexibility is now there for those who want to do more.
The biggest piece of this entire process was rewriting the primary video process. Until this update, it hadn't been modified since the initial launch of the service. It's no exaggeration to say that those 150 lines of code were the heart of this service. In the end, through some changes to the core architecture, I was able to get it down to 97 lines plus three new supporting functions. This reduction in total complexity means things should be easier for me to maintain while also being more stable in day-to-day use.
This rewrite made a frequently requested feature possible: having the individual clips available for download. Some creators just want the ability to drop the clips into their video editing program of choice. I don't like going back on initial assumptions and choices I made. That said, in my continuing effort to remove my own limiting choices from the app, I knew this one had to be added to the pre-1.0 release list. When you hear a user suggestion for the umpteenth time, it's time to listen and act on it.
Additionally, I took some time to make sure the generated clips include valuable metadata in their naming convention. For example, if the clip starts at 01:45:12 in the stream VOD, the resulting video clip is titled "8623__01h_45m_12s.mp4". The leading number is an internal clip ID and is useful for ordering clips. If an editor needs to go back and grab the 3 seconds before the clip begins as a lead-in segment, they know from the file name exactly where to go in the VOD.
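As a sketch, that naming convention can be generated with a few lines of Python. The function name and exact zero-padding here are my assumptions for illustration, not the app's actual code:

```python
def clip_filename(clip_id: int, start_seconds: int) -> str:
    """Build a name like '8623__01h_45m_12s.mp4' from a clip ID
    and the clip's starting offset (in seconds) into the VOD."""
    hours, remainder = divmod(start_seconds, 3600)
    minutes, seconds = divmod(remainder, 60)
    return f"{clip_id}__{hours:02d}h_{minutes:02d}m_{seconds:02d}s.mp4"

# 01:45:12 into the VOD is 1*3600 + 45*60 + 12 = 6312 seconds
print(clip_filename(8623, 6312))  # 8623__01h_45m_12s.mp4
```

Because the offset is baked into the name, sorting the files by ID and decoding the timestamp both work without touching a database.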
With the system now using individually rendered clips instead of subclips, funneling all of them into specific folders was a breeze. Zipping those files, rendered videos included, was also fairly simple. Delivering the zip files to the DigitalOcean Space I'm using for that portion did take some effort, since I'm so used to having everything local on a single server. But since Spaces are just a specially wrapped S3 bucket, one win quickly became many more.
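Because Spaces are S3-compatible, the delivery step can be sketched with boto3 pointed at a custom endpoint. Everything here (bucket name, region, key layout, function names) is a placeholder assumption, not the real configuration:

```python
def space_key(streamer: str, stream_id: int) -> str:
    """Deterministic object key so download links stay consistent."""
    return f"clips/{streamer}/{stream_id}/clips.zip"

def upload_zip_to_space(zip_path: str, streamer: str, stream_id: int) -> None:
    """Push a zip of rendered clips to an S3-compatible Space."""
    import boto3  # Spaces speak the S3 API, so boto3 works unchanged
    client = boto3.client(
        "s3",
        region_name="nyc3",  # placeholder region
        endpoint_url="https://nyc3.digitaloceanspaces.com",
    )
    client.upload_file(zip_path, "make-echoes-clips",
                       space_key(streamer, stream_id))
```

The only Spaces-specific part is `endpoint_url`; the rest is ordinary S3 usage, which is why one win became many.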
With all these pieces in place, I decided that it was the best move to go ahead and re-download and re-render all the clips and produced videos for every stream created over the past week. This way I could know with 100% confidence that the storage space had a consistent naming convention and could be trusted for all potential download links. All in all, it took nearly 24 hours to do this but in the end it did work as expected.
Finally, instead of just saving those clips and videos for streams where a YouTube video is uploaded, it’s done for any stream and any clip requested. This means that if a streamer doesn’t hit their required number of clips to produce a YT video, these clips are still available for download or delivery to their own S3 bucket for use in a future super-compilation.
Even though all the initial 1.0 features are live and currently working as expected, this isn’t a 1.0 launch quite yet. I have a ton of little UI/UX, user onboarding, website changes and go-to-market actions that need to get finished. As it currently stands, I feel confident that I’ll be ready to go shortly after the new year kicks off.
I've got a list of 30 features to consider developing, and in the coming weeks I intend to go back through it and sort out which are worth pursuing and in what order. More than half a dozen of them would take less than 2 hours to develop and deploy, so I might sneak some of them in between the other work to relieve the stress that working on things that aren't code creates for me.
Hope you've had a great week and, if you celebrated the holiday, that you did so safely.
Since the earliest days of the pre-Alpha, I knew that being able to declare start and stop points would be useful. Given the choices I made early on, this was always going to be difficult to achieve. The reason is that the entire system is based on a single point of data. The math works out as follows:
Echo Received (Point In Time) - Echo Duration - Stream Start (Point In Time) = Echo Start Point In Stream
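As a minimal sketch, that single-data-point math maps directly onto Python's datetime arithmetic (names and timestamps here are illustrative assumptions):

```python
from datetime import datetime, timedelta

def echo_start_in_stream(echo_received: datetime,
                         echo_duration: timedelta,
                         stream_start: datetime) -> timedelta:
    """Offset into the VOD where the requested clip should begin."""
    return echo_received - echo_duration - stream_start

received = datetime(2020, 11, 27, 21, 45, 12)  # !echo seen in chat
start = datetime(2020, 11, 27, 20, 0, 0)       # stream went live
offset = echo_start_in_stream(received, timedelta(seconds=60), start)
# offset == 1:44:12 into the VOD (1:45:12 minus the 60s duration)
```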
Limiting the number of inputs made the initial build easier, but it also made the tool easier to use. If you as a streamer had to remember to hit start and stop on every clip, that's more work. Then you get into all kinds of issues with multiple and overlapping requests. It's just a world of mess and a bunch of user education I didn't think made the tool better. Plus, you know when something interesting or fun has happened, not 15-60 seconds before it does.
That said, there are cases where it makes sense to be able to start a clip before something interesting happens. These include the start of a competitive match, the start of a song or the start of a speech at a conference. As producers of streamed content, we know when something in this category is going to happen. Being able to set that starting point in advance makes sense.
On the technical front, this effort required a bit of rethinking of the entire Clip Requests infrastructure. Once I had a plan in place, though, I knew I could execute it in short order. All in all, it was about 9 hours of "in the zone" coding to get it done, most of that time spent listening to the SavingLightCIC charity electronic music stream.
Here's how it works:
There are some serious limitations I've put in place to start. First, only the streamer can start or end an Echo Duration. As with many features since starting this project, I try to restrict the number of people who can make things go wrong to as few as possible. It's a simple risk-mitigation strategy.
Secondly, Echo Durations can at most be 60 minutes in length. While I do foresee an opportunity for live-streamed conferences and events to have a need for longer durations, those aren't the current customers I'm serving. If you are a customer and you have a need for a clip longer than 60 minutes in length, email me and we can discuss it.
Later this afternoon I will be deploying a setting in your YouTube Settings Page that allows you to decide how you want these Echo Durations processed. The default will be that your longer clips will be included in the existing stream recap video. The option to have these be uploaded as separate videos will be there for you.
When this option for "Post Echo Durations to YouTube" is selected, your existing branded first and last clips will surround the Echo Duration. The combined video will be exported and then uploaded to your YouTube channel as a stand-alone video using your existing YouTube settings including publication type.
This is a big ask of my overall YouTube quota because each of these videos costs me the same 1,600 quota points. So if someone has 3 standalone Echo Duration videos in addition to their stream recap video, that works out to 6,400 points. I have plenty of quota based on observed usage up to this point, but we are inching closer and closer to a full 1.0 launch. In the future, I may limit this to only posting the first Echo Duration to your YouTube channel and instead deliver the others to your account's home page, where you can download and manually upload them. I'll be sure to do whatever I can to communicate it if things change.
Finally, at the launch of this feature I have not included a final check for overlapping Echo Durations and clip requests when Echo Durations are included in the full stream recap videos. There are some technical reasons why but the main reason is because I am planning on re-writing the primary video processing functions this week. This is by no means a small undertaking as it is the underpinning of the result promised by Make Echoes.
Why would I re-write such a core function while the 1.0 launch closes in? Because I want to tackle another long-standing item on my development board before the launch: sending the source clips to a destination S3 bucket while also providing a single zip download that contains the original renders of your clips in your account. This is targeted at streamers who use an editor, or those who want the finest level of control over what goes out to their YouTube channel.
I created Make Echoes to be a tool, not to prescribe what content gets posted around the internet on your behalf. Getting the tool to give you the results you're looking for is always the goal here. Getting my personal experience and biases out of the way to help you create the content your audience enjoys is the ideal. You can absolutely expect more features leading into the 1.0 launch and shortly thereafter to give you the opportunity to customize your and your community's experience of using the tool.
Finally, I've reached out to the entire customer base (including those in trial) to take part in a brief 10-question survey. I partnered with a student team from the City University of New York's UI/UX Design school to have them work on Make Echoes as part of their capstone project. This survey helps them complete that project and also gives me direction on what I should be building next. If you have a spare 5 minutes, it'd mean a lot to me and the team if you filled it out.
Have a great week!
It's been a busy couple of weeks on many fronts. Let's just leave it at that.
I've been working on a bunch of things with Make Echoes behind the scenes. The platform continues to march closer to an official 1.0 release. Not sure of the exact date yet but odds on it will happen before the end of the year.
Let's get to the improvements that have made it live.
This was a request from @SgtFidget dating back to the earliest pre-Alpha testing. His normal game preference is the horror genre. That being the case, the timing of his clips matters for creating a compelling recap video for YouTube.
Now there are three categories of clip requesters, each with a value: streamer = 1, moderator = 2, approved user = 3. If a requester with a lower numeric value makes a request while the Chat Bot is in the cool-down of another clip, the system now removes the previous clip and adds the newly requested one in its place. If you're the streamer, the only person in your room whose cool-down you can run into is your own. This makes a lot of sense, since the video is being posted to your channel.
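A hedged sketch of that priority rule, with the data shapes as my own assumptions rather than the app's actual code:

```python
ROLE_PRIORITY = {"streamer": 1, "moderator": 2, "approved_user": 3}

def resolve_request(pending_clip, new_role, in_cooldown):
    """Return the clip that should survive this request.

    pending_clip is a dict like {"role": "moderator"}; the shape is
    an illustrative assumption.
    """
    if pending_clip is None or not in_cooldown:
        return {"role": new_role}
    if ROLE_PRIORITY[new_role] < ROLE_PRIORITY[pending_clip["role"]]:
        return {"role": new_role}  # lower number bumps the pending clip
    return pending_clip            # otherwise the existing clip stands

# A streamer request replaces a moderator clip still in cool-down:
print(resolve_request({"role": "moderator"}, "streamer", True))  # {'role': 'streamer'}
```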
I started noticing some weird differences between what YouTube reported my quota usage to be and what I had internally in the app. After a bit of digging, it turned out to be a bug in the code that was only triggered when specific streamers had videos uploaded to their YouTube channels. While this has zero impact on the user experience, it's the kind of effort that keeps things working. The last thing I want to experience again is the Day 1 bug, which burned through my YouTube quota in a few hours.
Since launching this app, I've kept a running text file of all the questions I've received. I now have quite the list, but nowhere to expose it to new users. A quick couple hours of building and data entry means these answers are now on the server. They're not accessible on the site yet, but they soon will be. I'm waiting until another piece of the website's UI/UX comes together to expose them.
As per usual, I've got other pieces in motion but it's not their primetime yet. Two massive features are in internal testing at the moment so maybe next week...
Have a great week!
To say that this week was extremely productive would be a massive understatement. I've found myself a rhythm with the two-a-day workouts before and after my day job followed by coding for Make Echoes each night.
This week I reached out to @CarefreeCallie as she's currently in the new 15 day trial of using the service. Like all new customer outreach I've done, I asked if there was anything that could be going better. The response I received was one I never expected to receive:
"Is there an option for the MakeEchoes bot to not post in the chat every time I make a clip?"
Confirmation of receipt is a feature I've worked hard to make almost instantaneous. That feedback from the system helps ensure confidence that it's working as expected. That said, Callie is using the bot in a way I never really thought it would be used. She's playing a story-based game and is not simply capturing moments of exciting action like my FPS streamers are. She's grabbing interesting moments in the story arc, the puzzle solving and the general funny moments with her audience.
Once I saw this use case, it was clear the total number of clips goes much, much higher. Her request to quiet down the bot made perfect sense.
After thinking about it for about 15 minutes, I came to the conclusion that there was no technical reason that the bot couldn't be quieted. And much like I did with @ProfessorBroman's request for a streamer only mode, I went to work. By the time Callie went live again, the bot was quiet... just a little too quiet.
I had muted all messages whenever a specific user flag was present. That also silenced duplicate-request notifications and errors, which are vital information. After following up with Callie on her Wednesday stream, the fix was simple: the quiet-mode decision now applies only to positive confirmation messages. Done and deployed.
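The fix can be sketched as a single decision function: only positive confirmations respect the quiet flag, while duplicates and errors always post. The message kinds here are illustrative assumptions:

```python
def should_send(kind: str, quiet_mode: bool) -> bool:
    """Suppress only positive confirmations when quiet mode is on."""
    if quiet_mode and kind == "confirmation":
        return False
    return True  # errors and duplicate warnings always post

assert should_send("confirmation", quiet_mode=True) is False
assert should_send("duplicate_request", quiet_mode=True) is True
assert should_send("error", quiet_mode=True) is True
```

Moving the decision to one choke point like this is what keeps the vital messages from being accidentally muted again.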
One area of the service I hadn't updated in any meaningful way since launch was the Chat Bot. Given how simple its code is, this was a set of to-do items that got disregarded because it was "easy".
This is one where it does what it says on the tin. Upon receiving a request, the system no longer just confirms it received it. The response message now also includes the total number of clips that are in the queue.
Titling clips has been a hidden feature of the echo command set since the very beginning. I've intentionally not educated the user base about it, because if a streamer is busy thinking of a descriptive title for the clip, it completely misaligns the clip's timing. It was far more important that they get the timing right for the content.
Streamers, moderators and approved users who can submit requests for clips can also use the !echotitlelast command to add descriptive titles to the previously submitted clip. These titles will become important given some of the in-development changes. Titling will allow the streamer to quickly know which clips need to be reviewed and which ones don't.
My friend Chuck (@rynoranger on Twitch and a talented coder in his own right), upon seeing the !echoundo function for the first time, said it needed to be restricted because the opportunity for misuse is so high. Thankfully, the user base hasn't abused it. That said, while adding all these other flags and checks to the Chat Bot, I decided to add a setting to Make Echoes accounts so users can restrict !echoundo to Streamer Only mode. This setting lives on the Chat Bot Settings page.
Previously, the system would only create a recap video for a user's YouTube channel if the streamer had 3 or more clips. This was a completely arbitrary decision on my part, and it felt oddly prescriptive.
I'm happy to report that users of Make Echoes can now set the number of clips required to start a video as low as 1 and as high as they want. This setting lives on the YouTube Settings page of the account once logged in.
When I started sending email notifications from Make Echoes that the render and upload had completed after a stream, I just used the email address the streamer signed up with. I know there are now a handful of streamers who use an additional editor to fine-tune the output from the service before pushing it live to their YouTube channel. Those editors would never know a video was ready unless the streamer routed that information to them via email, Discord or Twitch whisper. Anything dependent on human-to-human communication in this kind of situation is an area where mistakes happen. That's a kind way of saying that sometimes the machines just do it better.
To help here and get a streamer's editors notified that the source video is up on YouTube and downloadable inside the Make Echoes account, the streamer can specify a list of email addresses to receive the notification. This setting also lives on the YouTube Settings page.
I hope that this post shows just how much is happening on the platform. Without question, it is becoming more robust and more capable of handling a range of situations that the user base throws at it.
There's more news (and even more development news) but since it's not finished yet, I'll hold off.
Go have a great week and hang a few wins on the board,
One of the fun and challenging pieces of working in custom app development is bug hunting. "Bugs," as they're called, are when programs don't run as expected. For the past three weeks there has been an inconsistent string of failures.
It seemed to affect one user in particular but not on all of their streams. This is a programmer's nightmare scenario where an action fails only some of the time. Making things even worse, simply re-running the script to generate the stream recap video would work.
In these cases, logs are your best asset for sorting out the problem. As a reminder, m3u8 files are playlists of short (roughly 10-second) video segments; this is how Twitch stores stream VODs. They have also been the source of another, separate problem.
Pulling more and more of the pieces together now. Twitch changed the way they're doing .m3u8 files. This change includes as many as 5x duplicates of the pieces in it which is why I was seeing HUGE file sizes in the beginning.— Make Echoes (@MakeEchoes) September 2, 2020
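Assuming the duplicates show up as repeated segment URIs in the playlist, an order-preserving de-duplication pass is a one-function fix. This is a sketch, not the actual Make Echoes code:

```python
def dedupe_segments(segment_uris):
    """Keep the first occurrence of each segment URI, in order."""
    seen = set()
    unique = []
    for uri in segment_uris:
        if uri not in seen:
            seen.add(uri)
            unique.append(uri)
    return unique

# Up to 5x duplicates collapse back down to the real segment list:
print(dedupe_segments(["0.ts", "1.ts", "1.ts", "1.ts", "2.ts"]))  # ['0.ts', '1.ts', '2.ts']
```

Dropping the duplicates before downloading is also what brings those huge file sizes back down to normal.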
It looked like a few missing frames from some of the last clips when the video was being reconstructed. With that knowledge, paired with the fact that re-running the stream capture would work, I have to pin this one on Twitch. Specifically, I believe the full-resolution video hadn't finished being stored on Twitch's VOD storage servers.
So how does a programmer solve a problem like this? You build an automated kicking machine.
When video construction fails, I put the worker performing that task in a timeout for 10 minutes. After the timeout finishes, that worker does three tasks: first, it removes the previously collected local data for that stream; next, it attempts the download again; finally, it runs the reconstruction again. If this second build fails, the system stops trying and I'll manually kick it the next time I check the back end.
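Those three recovery tasks can be sketched as a single retry wrapper. The function names and the use of `time.sleep` for the worker timeout are my assumptions for illustration:

```python
import shutil
import time

def build_with_retry(stream_id, download, reconstruct, workdir,
                     timeout_minutes=10):
    """One timed retry; on a second failure, leave it for manual kicking."""
    try:
        return reconstruct(stream_id)
    except Exception:
        time.sleep(timeout_minutes * 60)            # worker timeout
        shutil.rmtree(workdir, ignore_errors=True)  # drop stale local data
        download(stream_id)                         # re-download the VOD
        try:
            return reconstruct(stream_id)           # second, final attempt
        except Exception:
            return None  # give up; handled manually on the next check-in
```

The 10-minute pause is the real trick here: it gives Twitch's VOD storage time to finish writing before the second download begins.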
While this method does double my inbound bandwidth on these streams, I'm nowhere near my host's allotment. As it only affects 0 to 2 streams each day, it's an acceptable resource-to-result tradeoff.
I deployed this solution earlier this morning. Here's hoping that those affected will begin to see quick response times again.
Have a great week. Here's hoping I get a couple of new features built and deployed next week.
It's been 5 weeks since I updated anything on Make Echoes, and that's because a lot of life has happened to me. Let's lead with the lede, though.
After looking at the usage data and the available capacity, I've decided that I'm willing to start offering trial accounts on the platform.
Effective immediately, new users can use the full system for 15 days. Obviously, I still have quotas so this is limited to the remaining seats on the platform. My hope is this is going to help more streamers get a feel for the tool and make a decision based on their actual usage.
If you've been on the fence, here's your opportunity to come in at no cost (you will have to place a credit card on file in Stripe though) and run the tool through its paces. There are no limitations or restrictions above our standard User Agreement.
Just cancel before that 16th day and you will never be charged.
I've worked with the Yoga With Adriene/Find What Feels Good team for over 5 years in various roles. I know that they are a tremendous team and that their business is as stable as any content creation entity can be. It also helps that I genuinely enjoy the human beings who make up the organization.
As August ended, the YWA/FWFG organization offered me a job. I accepted the role of Senior Web Developer and started on October 1.
In response to accepting this job offer, I made the choice to move to Austin, Texas. The vast majority of their team is based here and being local, while not required, is meaningful to me and to them.
Moving is always stressful and moving during a pandemic is even harder. This was my first time towing a trailer with anything larger than a jet ski and my first time towing with my own truck. I learned more than a few things both about myself and the truck in the process. I did the drive solo, only stopping for gas and a single nap to get me out here in as safe a manner as possible.
I am now unpacked and settled into the new apartment and have the first couple of weeks at the job behind me. I feel confident that going forward I can dedicate 5-15 hours a week to Make Echoes. That time will be spent building out features and investing in marketing.
My intention is that I'll be tackling 1-2 new features each week. I will then deploy them on Sunday mornings. That time slot has proven to be the least streamed time. Should something go wrong in a deploy, it will impact the fewest number of streamers.
Another eight days have passed and as expected with young SaaS businesses, there have been a lot of changes in what has been happening behind the scenes.
The site continues to acquire new customers at a higher than expected rate. Even though I stopped all marketing and promotion for reasons that will become obvious further into the post, Make Echoes currently has 25 paying users. That's an increase of 5 in a week dedicated entirely to non-marketing activities, which is positive. Rocco and I continue to talk about the potential for marketing, but we both want multiple days, if not an entire week, of being completely hands-off with things working as expected.
If you were paying attention to the @MakeEchoes Twitter account this week, you may have noticed that I re-tweeted a pair of tweets from @ProfessorBroman. Through a sequence of events that's still not entirely clear to me, he was aware of the product and has been giving it a shot, since he'd previously been frustrated working with editors. Because Make Echoes lets him issue the !echo commands and control the inputs, he controls the output.
ProfessorBroman had an interesting request on Day 1 of his usage and I thought it was important enough to drop everything and build it: The ability for the streamer to be the only one who can add clips.
On my someday list of features, I had the ability for the streamer to turn off clip requests for an entire stream. It turns out that if I make it streamer-only, they can simply not request clips and achieve the same result. It took longer than I care to admit to get this built and functioning as expected in all cases without blocking Moderators or previously approved users. That said, by Thursday it was operating without issue and let me cross off multiple items on the to-do list, which always makes me happy.
Through the course of the week I faced two positive problems: my initial attached storage filled up with so many videos created in the last 7 days that I needed to increase it by 50%, and a single 14+ hour stream was at a high enough resolution and frame rate that the server couldn't handle it.
The first is a relatively straightforward solution of simply paying for more storage. That said, I will end up having to undo this work in the next couple of weeks as this storage space is about 5x more expensive than the S3 compatible Digital Ocean Spaces product.
To solve the second, the easier solution that wouldn't require me to rewrite core code was to actually upgrade the server entirely. Sure, it'd double my server cost but in exchange I'd be getting 4x the local disk space and as importantly 2x the CPUs and RAM. These extra resources are allowing videos to be rendered out even faster than before. At this point, the core bottleneck is the number of workers I have downloading the VOD parts. With the new server, in theory I should be able to double that. It's something I'll end up testing later this week.
Finally, in an effort to begin separating things so they can scale as appropriate, I moved the chat bot onto its own little server. The reasons are two-fold. First, any time I updated the core server's code in a way that required restarting the background processes, that restart would restart the chat bot too, since it ran as a background task. This would knock its 29:59 runtime clock out of sync with the :28 and :58 reboot schedule; as a result, multiple chat bots could end up in one room and add the same clip multiple times to a stream recap video. On a separate server, this can't happen: the bot continues its collection and processing regardless of how many reboots I need to do on the primary server. Second, living on a separate server means I can closely monitor this task, as I suspect this process is where the CPU spin-up was originally coming from. If that's the case, any runaway process now won't affect the website, the account side or video rendering. All in all, a well-spent $5/month from my current perspective.
There are a bunch of additional little items that got crossed off the board these past 8 days, including:
This week is a smaller development effort and it's mostly circling back to the previously mentioned items.
Hope you all are having a great start to the week.
To say it's been an eventful 4.5 days since the launch of Make Echoes would be a bit of an understatement. The Twitter response from the Twitch streamer base was honestly amazing. Special respect goes out to the Destiny directory which really did a lot of lifting on amplifying the social reach of this launch.
As it stands right now, 20 of the 100 Alpha slots are spoken for and honestly, I haven't done half of what I wanted to for the initial launch. A lot goes into holding some of those actions in reserve but the biggest reason was that I wanted to make sure the service worked as anticipated. This week the service went from some 7 test streamers to 29 total users and it started to show some strain and some incredibly odd behavior under the additional load.
That said, between the test streamers and the new customers, 45 new YouTube videos have been produced since Wednesday's launch. Fewer than half a dozen showed some kind of unexpected result, and most of those were resolved and re-uploaded within 24 hours.
As I think I've made clear, this is still very much an Alpha, and Thursday morning I woke up to just over 200 emails from my server complaining about not being able to upload a video. My YouTube API quota allows well below 200 video uploads a day, so upon opening my inbox and seeing those emails I knew it was going to be a very rough day.
It turns out that through a completely unexpected series of events, some string manipulation related to YouTube posting data created an invalid entry in the YouTube description. The status code and error message being sent back by YouTube were ones I had never seen before, so the tool didn't have a way to process them. So Make Echoes, being the good bot that it is, kept trying to send the stream recap video it had created up to YouTube. That's how it blew through the entire day's quota in about 45 minutes.
I've put some restrictions in place to prevent this from happening again: a localized version of the YouTube quota where I deduct 1,605 points vs. YouTube's 1,600 just to make sure it always stops short, and a counter that kills the entire script after the third upload attempt. Additionally, I now run a final regex against the title and description strings to eliminate the kinds of characters that could produce this result again.
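Those guards can be sketched in a few lines. The daily quota figure and the exact whitelist regex are placeholder assumptions, not YouTube's actual limits or the app's real rules:

```python
import re

DAILY_QUOTA = 10000   # placeholder; the real daily cap may differ
UPLOAD_COST = 1605    # deliberately over-counted vs YouTube's 1600
MAX_ATTEMPTS = 3

# Conservative whitelist; the real set of safe characters is assumed
_UNSAFE = re.compile(r"[^\w\s.,:;!?'\"()\-#@/]")

def sanitize(text: str) -> str:
    """Strip characters outside the whitelist from titles/descriptions."""
    return _UNSAFE.sub("", text)

def should_retry(attempts: int) -> bool:
    """Kill the upload script after the third attempt."""
    return attempts < MAX_ATTEMPTS

class QuotaLedger:
    """Local quota tracking that always stops short of the real limit."""
    def __init__(self, daily_quota: int = DAILY_QUOTA):
        self.remaining = daily_quota

    def can_upload(self) -> bool:
        return self.remaining >= UPLOAD_COST

    def record_upload(self) -> None:
        self.remaining -= UPLOAD_COST
```

Over-counting each upload by 5 points means the local ledger always hits zero before YouTube's real quota does, so a retry loop can never burn the day's allotment again.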
This is exactly why I wanted a limited number of users to work with here at the start. Uploading a dozen videos between midnight and 6 AM to make up for a lost day's uploads is fine. Doing 100 would have been a stretch. Doing 1,000 likely would have broken me and the way the system creates these videos. Here's hoping the YouTube quota is never a problem again, given all the changes.
Shortly after the first users signed up for their accounts and the Chat bot got in their chat rooms, the CPU utilization went from <2.5% to 40% and then held in that position. While I was handling a bunch of issues for customers and then the YouTube Quota Exceeded event, it was in the back of my head that some of the other weird behaviors I was seeing from the server could be related to this high CPU rate.
The first two attempts to nail down the source were completely unsuccessful. Friday evening, I decided to bring the service down the next morning to really sort it out. Fortunately, I went to bed and woke up with a new idea to check, and sure enough, 18 additional characters in the right file on the server plus a quick full reboot got everything fully connected to all the chat rooms, doing renders and moving files within the expected CPU usage ranges. Most importantly, when the server wasn't under large user load or rendering a video, it was back at less than 3% CPU usage 92%+ of the time.
With the server in a good spot and the first customers creating videos that were successfully posted to their YouTube channels, I spent yesterday afternoon and evening tackling the two biggest feature requests that I had heard from the test streamers and the new customer base: The ability to remove the last clip and the ability to give viewers the ability to add clips.
The !echoundo and !echodel commands are actually incredibly simple in practice, but when testing on one of my own streams I found that repeated use could delete clips one might actually want to keep. Accordingly, the feature is limited to clips generated in the last 5 minutes. This gives someone the ability to remove a known bad clip from chat without having to go to the back end of Make Echoes and delete it, while limiting the damage a single mod/approved user could do should they go rogue.
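The 5-minute window comes down to a single timestamp comparison. A minimal sketch, with names assumed for illustration:

```python
from datetime import datetime, timedelta

UNDO_WINDOW = timedelta(minutes=5)

def can_undo(clip_created_at: datetime, now: datetime) -> bool:
    """Only clips generated in the last 5 minutes can be removed."""
    return now - clip_created_at <= UNDO_WINDOW

now = datetime(2020, 9, 20, 12, 10, 0)
assert can_undo(datetime(2020, 9, 20, 12, 6, 0), now)      # 4 minutes old
assert not can_undo(datetime(2020, 9, 20, 12, 0, 0), now)  # 10 minutes old
```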
Since day 2 of using Make Echoes, @SheSnaps has asked for the ability to have non-mods add clips to the stream video. I had hoped to sneak this one in before the release, but it just didn't happen since I ended up having to build the website three different times (war stories for another day). That said, this is another simple feature to build if you already have the API frameworks tested and the chat bot pushing and pulling data. Sure enough, after about 3.5 hours I had it functioning, tested it last night without too many issues, and decided to push it to production for the user base.
!echoadduser or !echouseradd plus the target Twitch name will now allow a normal viewer's suggestions to be added to the stream recap video.
!echoremoveuser or !echouserremove plus the target Twitch name will revoke that permission.
Additionally, when a user without permissions tries to add a clip, the @MakeEchoes chatbot now responds in the chatroom to let them know. The hope here is that if they know they can't add clips, they won't spam the chat room.
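The commands above can be sketched as one small permission-gated dispatcher. This is entirely my own illustration: the handler signature, the reply wording, and the `!echoclip` suggestion command are hypothetical stand-ins, not the real Make Echoes internals:

```python
# Hypothetical sketch of the !echoadduser / !echoremoveuser permission flow.
# "!echoclip" stands in for whatever the actual clip-suggestion command is.
approved_users = set()  # viewers granted clip-suggestion rights


def handle_command(sender, is_mod, command, target=None):
    """Return the chatbot's chat reply for a command, or None for no reply."""
    if command in ("!echoadduser", "!echouseradd") and is_mod and target:
        approved_users.add(target.lower())
        return f"@{target} can now suggest clips for the recap video."
    if command in ("!echoremoveuser", "!echouserremove") and is_mod and target:
        approved_users.discard(target.lower())
        return f"@{target} can no longer suggest clips."
    if command == "!echoclip" and not (is_mod or sender.lower() in approved_users):
        # Tell the viewer they lack permission so they (hopefully) don't spam.
        return f"@{sender}, you don't have permission to add clips."
    return None
```

Answering the unpermitted user in chat, rather than staying silent, is what discourages the repeat attempts mentioned above.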
The list of features I want built before I call this a 0.2 started at 23 items. With the work put in since the launch, that list is now down to 16. Here's what I've got my eyes on for this next week or so:
Overall, going from where I was 8 days ago, uncertain of both the tech and whether it had a potential user base, to where this all is now is a hell of a turnaround. And the energy from the user base and those looking on via social media has only made what I lovingly call an "obsession project" (not a passion project, since this has literally taken over my conscious hours in a way nothing else ever has) take over even more hours and cost me more sleep getting all the pieces right. It'll calm down soon as I go back to working on contract work, but for now I'm rolling with it.
Go and be awesome in the world. We all deserve it.
Even though this entire platform uses Python, I simply couldn't miss the opportunity to open up this dev blog with some coder humor.
After nearly 7 weeks of a full-out sprint to build the custom functionality needed, resulting in hundreds of commits to GitHub, Make Echoes is ready to support more users.
As with most finish lines in life, this is just another starting line.
This is the start of where this tool gets its time to see the light of day, which is a big moment for any product. No longer is this tool simply for the incredibly generous test streamers we were working with through the initial development. (Test streamers, I've said it privately and I'll say it again here publicly: thank you for everything you did. Each of you made a meaningful recommendation or note. Your feedback led me to create more features in hopes of solving the problem of generating YouTube content from your streams quickly and efficiently.)
With the Alpha release, the following core functionality has been completed:
This is no small feat as a solo programmer. There were more than a few times where I was frustrated, annoyed, overwhelmed and/or upset about how things were going. That said, this project, this customer base, this community of content consumers, all continued to push me toward completion.
Every time I spoke with someone about this project and showed them the videos it was outputting automatically, the shocked expressions I received were a sign I was developing something potentially universally useful.
Now here we are, 47 days after a Twitch whisper started all this, with two approvals from Google/YouTube, one from the team at Twitch, and a functional software-as-a-service application available for purchase.
I'm writing this less than 16 hours before the Alpha sale goes live and I still don't know what kind of response this is going to get. Maybe there are 5 people who see the vision that this product has and are willing to pay to try it out. Maybe all 100 seats sell. Maybe the waitlist fills to an insane level.
All I can do as a coder and, in this instance, as an entrepreneur is put this out there and let the market decide whether or not it's worthy of finding a customer base.
On a technical front what's up next for Make Echoes:
Depending on how this launch goes, there may be additional to-dos added to my list.
I also want to start working on the next set of features.
First up is the streamer-requested email notification for when the video has been uploaded, with a link that'll take them to the right page in their YouTube account so they can do any updating/editing once it's ready. This would save them from having to check back in on their YouTube channel and hope that Make Echoes had finished its work.
This next batch of features also includes the most frequently requested one: the ability for a streamer to allow certain usernames, in addition to their moderators, to submit clips from their chatroom.
My intention for this blog is to keep it fairly tight around just what I'm working on, what I've deployed, and what's up next. Since this isn't my full-time job, there may be some irregularity in how all this happens, so consider this your warning: there may be times with a week or so between posts.
But first things first: a silent moment of celebration as the initial prototype build is complete, and then it's time to do the necessary listening, learning, and building to grow Make Echoes into a full-featured 1.0 release in the coming months.
There will be failures. There will be mistakes. It's part of the process. If in the end this helps someone along their content creation journey, it will have succeeded as a utility. Here's hoping Make Echoes lives up to the potential that so many see in it.