Johnathan.org
Johnathan Lyman

My name is Johnathan Lyman. I'm an engineer at Papertrail, a huge Apple nerd and semi-regular blogger. I enjoy bubble tea way too much and find Farming Simulator relaxing.

Find out more.
2014 – 2018 Johnathan Lyman. All 339 posts and 12 pages were made in Seattle.


July 2016 Archives

My Learn Ruby on Rails List

Programming

I’ve been meaning to write this post for a while and kept getting distracted. This evening I sat down and finally hammered it out. The goal is to share some of my favorite Ruby on Rails learning resources with the community. Let’s get started!

Ruby is an incredibly versatile programming language and the 6th most popular language on GitHub. By extension, Ruby on Rails is a powerful web application framework that powers some cool sites: GitHub, AirBnB, Funny or Die, Groupon, Hulu, Square, and Urban Dictionary are a few from a pretty long list. Neat.

If you’re interested in getting started in learning Ruby on Rails, I’ve put together a list of some of my favorite online tutorials, books, screencasts, and courses to get you closer to becoming the Ruby and Rails expert you always dreamed of being.

Books

The Ruby on Rails Tutorial

Michael Hartl’s course is probably my go-to resource for learning Ruby on Rails. You can read an online copy of his book for free, but buying it means you get access to the answers guide and, optionally, screencasts.

I worked through this book when it was written for Rails 4 and learned a lot about the basics of Rails. It covers just about everything, including MVC, testing, databases, and deployment.

Coming in November 2016 is a print version of the book. If you’re a fan of print material, this one should definitely be on your list.

Learn Ruby On Rails For Web Development

John Elder is a veteran programmer at Codemy and his book is a killer resource for the absolute Rails beginner. Learn Ruby on Rails For Web Development covers a lot of the same material as Michael Hartl’s book, but if you can’t wait for a print version, I’d grab this one. Plus, if you’re a Kindle Unlimited subscriber, the book is zero dollars!

Learning Rails 5: Rails from the Outside In

If you’ve ever read an O’Reilly book in your life, you’ll feel right at home with the familiar single-color cover theme and interesting animal choice. In this case, Learning Rails 5 rocks a horse-looking animal, but don’t let that deter you. Mark Locklear and Eric Gruber guide you through the simpler components of Rails first, then gradually introduce more complex topics.

Online

Code Academy

If you’re like me, you enjoy a good online course. Code Academy provides just that with a five-hour introduction to Ruby on Rails. If you learn well by doing, this might be just your thing. Code Academy teaches you the basics using various projects to highlight components of the Rails framework.

Code School

If you’re looking for more in-depth online learning, Code School is your answer. Each of the courses here is more in-depth and covers more complex topics. Don’t worry, though: Code School covers the basics, too. You’ll find yourself spending a lot more time here than on Code Academy.

Rails Casts

While new videos haven’t appeared in a few years, the information is still relevant and new videos are coming soon, according to Ryan, the site’s creator. To date, there are roughly 400 videos to watch, though most require a subscription ($9/month).

Coder Manual

While it bills itself as a coding bootcamp, I’d say Coder Manual is closer to a regular online course. It’s incredibly in-depth, and the videos are cut up into small chunks to make them easy to consume. You can follow along with your own project as well as get the materials used in each section.

Beyond Rails knowledge, you’ll also touch on HTML, JavaScript, and even job hunting. It’s a bit on the pricey side, but for the material and education you receive, I’d say it’s worth it.

Bootcamps

Coding Dojo

If you’re looking for a more serious, structured course, a bootcamp will likely meet those needs. Coding Dojo’s 20 week program teaches you not just Ruby on Rails, but Python and Web development fundamentals. If you’re up for it, you can learn on site, too.

Bloc

I’ve always been a huge fan of Bloc. It’s pricey, but they have financing options and the courses are some of the most in-depth I’ve found. You’ll meet with someone at least once a week as you work through the program and get the opportunity to build real applications that do what you want them to do.

Launch School

If the idea of paying a ton of money up front doesn’t sound appealing to you, you’re not alone. Launch School offers crazy in-depth courses covering front-end, back-end, APIs, and career assistance for $199 a month. If you’re like me, you spend half of that on complicated coffee drinks every month, anyway.


Did I miss anything? Let me know in the comments, below, if you have any great resources you think I should add here.

Polymail

Reviews

I’m super excited to write this blog post. I’m always interested in new mail clients (I miss Mailbox). When I found out Polymail dropped for both iOS and Mac, I jumped on it. Apparently so did everyone else, which led to some problems.

If you’ve never heard of Polymail before, don’t feel bad. I didn’t hear about it until super recently, myself. Polymail has a super slick UI that doesn’t waste space with stuff you don’t need. It features killer email delegation and reminders (remember when Mailbox let you put off an email until a date in the future?), lets you get information about the person you’re talking to, and keeps all of this organized and synced between your devices.

Triage

Let’s talk about the follow-ups, first. I’m a huge fan of triaging tasks. If it’s not time-sensitive, it doesn’t need doing right now. Granted, if there’s nothing else that needs doing, that logic doesn’t apply. This is super helpful for those with noisy business inboxes where everyone and their grandmother is clamoring for your eyeballs to absorb their textual essences. Sorry grandma, I’ll take a look at that chain letter tomorrow at 4:30 PM.

You can choose to follow up on a conversation using one of the preset dates, be super noncommittal and say “read later”, or pick an arbitrary date in the future, because you’re a master of your schedule and you know you have sixteen time slots open right now between today and Christmas 2018. Those voids of sadness need filling!

On the flip side, you can remind yourself to follow up with someone else if they don’t read your email. This is a neat feature, but be careful. You can very easily become “that person” that everyone in your office hates. You know which person I’m talking about: “hey did you get my email?”

With great power comes great responsibility. Can I trust you to not abuse it?

Following Up

When you’re writing your digital prose to the person on the other end of your email exchange, knowing who the heck they are is important. It’s even more important in cases like candidate screening or figuring out whether the person is real. Next time you get resumes for a job posting, use the information Polymail gleans about them on your behalf.

The Downside of Cloud Sync

While writing this quick review and even attempting to use Polymail, I ran into two problems that I think are worth noting.

First, cloud sync is very dangerous territory to enter if you’re not prepared, and in Polymail’s case, I don’t think they were. The sign-up process requires you to give them access to your email accounts, which is fine. The problem is, they do everything through their servers. The emails don’t go straight from gmail.com or outlook.com or your O365 account to the client. Nope. That’s too easy (or hard). I found this to be true when, after about six hours, Polymail claimed I didn’t have new email. If only that were *really* the case.

Hi all! 👋 Initial mail syncing may take a little longer due to high traffic right now, but we’re working on it. Thanks for your patience! 💌

— Polymail (@PolymailApp) July 21, 2016

That’s the tweet Polymail posted about the delays. Given I haven’t received more than one round of emails, I’d say it’s more than a delay.

My second problem is moving email accounts from one Polymail account to another. I wanted to use Polymail for work, too, so I created a work Polymail account. I also wanted to get my work email on my personal devices using Polymail. My options were to use my work Polymail account everywhere else or move my work email to my personal Polymail account. I opted for the latter but ran into an issue.

My work email address is stuck in some sort of “account will be deleted” state that won’t progress. I didn’t think it’d take hours to delete an account, but I guess so. In the meantime, no work email via Polymail.

End of the Day

Originally I was pretty hesitant to jump in because of the cloud syncing issues. This morning I checked on it, and everything seems to work well now; I’m no longer getting the error when adding my work email account. With all that being said, I’d definitely recommend Polymail.

You can pick up Polymail from polymail.io for the Mac and the App Store for iOS.

Live Streaming with Hardware Acceleration using a Raspberry Pi and RTMP/HLS

Programming

If you’ve been following my blog post series on the development of my ever so useful cat cam, powered by a Raspberry Pi, you’ll know I’ve made several attempts at a more stable and scalable streaming solution for my Cat Cam. As it stands today, I’ve been using Motion. While it’s a decent tool, bandwidth has been my primary concern, and I’d like to be able to stream in real time without sucking up what measly bits my ISP gives me if more than a few folks decide to show interest.

So far we’ve tried ffmpeg => ffserver and that turned out exactly how you probably thought it would. Next, I tried swapping ffserver with an Nginx-powered RTMP server. While not an entirely fruitless endeavor, there were some blockages that I just couldn’t get past.

I received a suggestion from a colleague to fire up the Raspberry Pi’s hardware encoder/decoder. Up until yesterday, I didn’t know this was a thing. Shame on me for not looking into it. So that’s what we’re going to cover in tonight’s post: taking some of what we learned from our first RTMP attempt and making the hardware do all the work. With any luck, we should see some real perf gains, possibly enough for live streams to start instantly (which would make web players happy).

Since I felt like including it here would deviate from the purpose of this post too much, I wrote up how to Add RTMP Support to Nginx if you installed it via apt-get like me. If you’re in that boat, take a moment to read over that post then come back to this one.

Setting up ffmpeg to use hardware H.264 encoding used to be a fat challenge, but they’ve since added support to the official codebase. If you followed my original ffmpeg post, you’ll have a recent enough version that includes this code, but we’ll still need to compile it.

What we’re looking for this time is the OpenMAX IL (Integration Layer) acceleration module.

pi@pi:/usr/src/ffmpeg $ sudo ./configure --enable-omx --enable-omx-rpi
sudo make
sudo make install

That’ll take some time, as I’ve said before. You’ll have enough free time on your hands to make something to eat. Come back in an hour or so and it should be done.

NOTE: If you run into ERROR: OpenMAX IL headers not found, then you’ll need to run apt-get install libomxil-bellagio-dev. Thanks, lordofduct in the comments for that one!

From this point forward, we’ll be starting ffmpeg similarly to how we did it before but with a slightly different codec.

ffmpeg -i /dev/video0 -framerate 30 -video_size 720x404 -vcodec h264_omx -maxrate 768k -bufsize 8080k -vf "format=yuv420p" -g 60 -f flv rtmp://example.com:8081/hls/live

I confirmed VLC is able to play the stream, which is excellent, and there are no lag or jitter issues. It’s about 10-15 seconds behind live, which is totally fine.
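That delay squares with some back-of-the-napkin HLS math. These are my own rough assumptions (players commonly hold about three segments before starting, and a segment can’t be cut shorter than one GOP), not measurements:

```shell
# GOP length in seconds, from the flags used above (-framerate 30, -g 60).
FPS=30
GOP=60
GOP_SECONDS=$((GOP / FPS))

# HLS players typically buffer a few segments before playback starts;
# three is a common rule of thumb (an assumption, not a measured value).
SEGMENTS_BUFFERED=3

echo "keyframe every ${GOP_SECONDS}s"
echo "startup delay at least $((GOP_SECONDS * SEGMENTS_BUFFERED))s behind live"
```

With segment durations a bit longer than one GOP, that floor climbs toward the 10-15 seconds I’m seeing.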

I was able to set up an HTML5 player using tools from Bitmovin. I’m not entirely happy with this setup, though, as the player isn’t free and only HLS is supported right now. In my next post I’ll cover a new idea that came to mind when looking into the coolness of Ruby on Rails 5: WebSockets.

Update July 11, 2017: @HPGMiskin pointed out libomxil-bellagio-bin is not a thing. I’ve pulled that from the optional step for missing OpenMAX headers.

Revenant

Blog

It takes a lot for a movie to convince me to write a blog post about it. I generally don’t do movie reviews, and this isn’t one. What this is, is a reflection; a documentation of experiences and feelings. For the next few minutes, I want to pour myself out just a bit.

The Revenant was an impulse buy. I was strolling through Target and saw it on the shelf almost by happenstance. In the most ironic fashion, I quickly checked to see if it was cheaper on Amazon. To my surprise, it wasn’t, so I grabbed it. With my upcoming foray into 4K, I grabbed the 4K UHD + Blu-Ray + Digital HD combo pack for about $25 plus tax.

After grabbing some lunch to go, I promptly shuttled myself home and popped the disc into my PlayStation 4. I had heard good things about this movie and wanted the highest-quality experience I could muster with what I had. This meant grabbing my Bose noise-canceling headphones, plugging them into my PlayStation 4 remote, and opting to pipe all audio through them.

Best. Idea. Ever.

(some mild spoilers ahead)

From the very beginning, I knew I was going to enjoy this movie. I will quickly get enthralled with any kind of atmospheric experience and I feel The Revenant definitely delivered.

I cannot recall the last time my emotions matched those of the character on screen. This film isn’t just about a fur trapper in the vast, monolithic landscape of the unexplored American West. It’s not a survival movie. Leonardo DiCaprio’s character isn’t battling nature.

This movie is a battle of man. The struggle between human nature and nature’s humans. As the song goes this land is your land, this land is my land, from California, to the New York Islands… except it wasn’t. This land was vast and untouched. Scar-free. Void of the cancers that man inflicted on it.

It’s also man versus himself. Man versus his inner corrupted soul. Man versus his desire for more at any cost. Man versus his bloodlust for alpha status.

Throughout the journey on which the film takes you, a feeling of sadness sets in and you start to feel overwhelmed with the chaos unfolding. The fight against (definitely not with) nature will take its toll on you, the viewer.

The Revenant | © 20th Century Fox

When you start to feel what the characters are feeling… when you find yourself frustrated and wishing for nothing more than for Hugh to pick himself up and keep going… you quickly realize that’s not how this works.

Fighting against man is something man will do forever. Man’s mind is narrow, shallow, and full of arrogance. If they’re not careful, man will consume itself.

You’re in a battle with yourself and the world around you (in a metaphorical sense); there isn’t an endless supply of energy and motivation. There’ll come a time when you feel defeated.

That’s when you choose what’s most important to fight for. Do you fight against one or fight for another? Where do you think your real hidden strength lies?

Coming back to nature for a moment, in this context, nature is the helpless victim in this battle. Nature will fight back, but can only fight for so long. The infamous bear scene is a pure and gritty example of this. This isn’t a bear attacking the character, this is nature defending itself.

Until it can’t. Man has won… but at what cost?

The Revenant | © 20th Century Fox

Weaved into the major story is a minor one about personal loss. Imagine what it would be like to lose the ones you loved to such savagery with no way to stop it. Just watching the world around you fall to pieces is enough to make anyone lose their shit. The commentary here is real: those who are closest to you have the most profound effect on you, both when they’re by your side and when they’re gone.

By now you’re probably thinking I’m out of my damn mind. Being unaware of the complexities is one thing… to pretend they’re not there is another.

The Revenant | © 20th Century Fox

The larger message of this film is how man took over the land we call America. It was gritty, chaotic, violent, and fucking selfish. The White Man was a savage beast that stopped at nothing to get what he wanted, all the while thinking the natives were the savage ones. The hubris was overflowing. The amount of blood, tears, sadness, and anger this country was built on is enough for millions of lifetimes. The expense? Heartache. Loss. Sadness.

Director Alejandro Iñárritu is a master craftsman. The visuals are pristine, as was the land before. The entirety of the film was shot with all natural light. If it’s dark, it’s dark. If it’s light, it’s light. The absence of artificial atmosphere will pull you in. You’ll be in awe of the landscapes, shot mostly in Canada and Montana. The crispness of the air, the ice-cold flowing rivers, and the crunch of the soft-packed snow will stimulate your senses.

On top of the visuals, Ryuichi Sakamoto’s soundtrack does an amazing job of hitting home the feelings described above and experienced throughout the movie. I’d be surprised if you don’t feel like you need a hug after witnessing what it’s like to be alone, clinging to life, with only yourself and what drives you to keep motivation at least sub par.

But don’t take my word for it. Don’t take my word for any of it. Go watch it, yourself. Go experience it, yourself. You’ll thank me.

Add RTMP Support to Nginx Installed From Apt

Programming

In the process of trying to figure out the best streaming solution for my cat cam, I had to deviate a bit. I combined the RTMP server for my cat cam and the Web server for johnathanlyman.com into one, and the latter didn’t have the RTMP module installed. This module is required for my attempts to push H.264 video and have Nginx relay it to whoever is watching, cutting down on the bandwidth of the one-to-one reverse proxy setup I have now.

It’s a pretty straightforward process to re-compile Nginx, but there are a couple of extra steps involved if you installed Nginx from a package repo. I’ll be sure to cover those. What we’re doing here is re-compiling a deb package. By going that route, we adhere to the same method by which Nginx was installed in the first place, so we don’t end up with two competing installs.

Like my other RTMP/Nginx-inspired post, I’m using Ubuntu 16.04 (xenial), so everything will revolve around that.

Before we begin, we’ll want to make sure we’re updated and ready to go:

apt-get update
apt-get upgrade

Next, install software-properties-common if needed then add the nginx/stable ppa.

apt install software-properties-common
add-apt-repository ppa:nginx/stable

Now we’ll be able to grab the source files from the repo.

cd /usr/src
apt-get build-dep nginx
apt-get source nginx

Whatever directory you run that in is where the source and dependency files will appear. I chose /usr/src as that’s where we’ve been working in these previous posts. Mine looks something like this:

user@server:/usr/src# ls -al
total 1888
drwxr-xr-x  4 root root    4096 Jul  8 17:20 .
drwxr-xr-x 10 root root    4096 Apr 21 09:56 ..
drwxr-xr-x 10 root root    4096 Jul  8 17:20 nginx-1.10.1
-rw-r--r--  1 root root 1000448 May 31 19:05 nginx_1.10.1-0+xenial0.debian.tar.gz
-rw-r--r--  1 root root    2765 May 31 19:05 nginx_1.10.1-0+xenial0.dsc
-rw-r--r--  1 root root  909077 May 31 19:05 nginx_1.10.1.orig.tar.gz

Let’s move into the source folder and download the RTMP module:

cd nginx-1.10.1/debian/modules/
git clone https://github.com/arut/nginx-rtmp-module

Back up one directory and open rules in a text editor. I’m using nano. Add the module to the end of the --add-module list under common_configure_flags or full_configure_flags[1], like so:

[common|full]_configure_flags := \
                        $(common_configure_flags) \
                        [...]
                        --add-module=$(MODULESDIR)/ngx_http_substitutions_filter_module \
                        # NEW MODULE BELOW
                        --add-module=$(MODULESDIR)/nginx-rtmp-module

Now that the module is in, let’s re-compile![2]

cd /usr/src/nginx-1.10.1
dpkg-buildpackage -uc -b

How long this takes depends largely on how powerful your server is. I say go get a beverage and come back in a few minutes.

Once it’s done, you’ll get your set of .deb packages:

cd /usr/src
user@server:/usr/src# ls -al
total 16088
drwxr-xr-x  4 root root    4096 Jul  8 17:52 .
drwxr-xr-x 10 root root    4096 Apr 21 09:56 ..
drwxr-xr-x 10 root root    4096 Jul  8 17:20 nginx-1.10.1
-rw-r--r--  1 root root   23788 Jul  8 17:52 nginx_1.10.1-0+xenial0_all.deb
-rw-r--r--  1 root root    3756 Jul  8 17:52 nginx_1.10.1-0+xenial0_amd64.changes
-rw-r--r--  1 root root 1000448 May 31 19:05 nginx_1.10.1-0+xenial0.debian.tar.gz
-rw-r--r--  1 root root    2765 May 31 19:05 nginx_1.10.1-0+xenial0.dsc
-rw-r--r--  1 root root  909077 May 31 19:05 nginx_1.10.1.orig.tar.gz
-rw-r--r--  1 root root   43932 Jul  8 17:52 nginx-common_1.10.1-0+xenial0_all.deb
-rw-r--r--  1 root root   35342 Jul  8 17:52 nginx-doc_1.10.1-0+xenial0_all.deb
-rw-r--r--  1 root root  746780 Jul  8 17:52 nginx-extras_1.10.1-0+xenial0_amd64.deb
-rw-r--r--  1 root root 6627998 Jul  8 17:52 nginx-extras-dbg_1.10.1-0+xenial0_amd64.deb
-rw-r--r--  1 root root  471502 Jul  8 17:52 nginx-full_1.10.1-0+xenial0_amd64.deb
-rw-r--r--  1 root root 3806376 Jul  8 17:52 nginx-full-dbg_1.10.1-0+xenial0_amd64.deb
-rw-r--r--  1 root root  333962 Jul  8 17:52 nginx-light_1.10.1-0+xenial0_amd64.deb
-rw-r--r--  1 root root 2428032 Jul  8 17:52 nginx-light-dbg_1.10.1-0+xenial0_amd64.deb

We’ll need to remove Nginx[3]. As long as we don’t purge, the config files will stay in place. It never hurts to get a backup, anyway, though.

apt-get remove nginx [nginx-core]

Now let’s install our newly compiled version of Nginx[4]:

dpkg --install /usr/src/nginx-[common|full]_1.10.1-0+xenial0_amd64.deb

If it didn’t blow up, we’re in decent shape. To be in even better shape, make sure your module was installed by running nginx -V. You should see something like the line you added to the rules file earlier (probably at the end):

[...] --add-module=/usr/src/nginx-1.10.1/debian/modules/nginx-rtmp-module

Since we tinkered with Nginx, mark it for version hold[5] so apt-get upgrade doesn’t wipe out our changes:

apt-mark hold nginx-full

That’s all you need to do. Happy sysadmin-ing!


  1. Whichever you pick will dictate which you install. ↩︎

  2. Word of the week, it seems. ↩︎

  3. I had nginx-core installed, so I had to remove that, as well. ↩︎

  4. Pick the flavor depending on where you put the module in the rules file from earlier. If you choose a different flavor, your module won’t be installed. ↩︎

  5. To undo this, use apt-mark unhold ↩︎

The Taste of Reckoning

Blog

I’ve been waiting to review Appletiser for some time, and that time is now: Friday afternoon. In the past, I’ve been blocked by the prohibitive cost and the fact that the first time I tried ordering it, Amazon sent me Grapetiser. If you haven’t read that post yet, go do it. Now.

I found a shop in Los Angeles that sells Appletiser by the can at $2.95. That doesn’t seem like a lot, and in reality it’s about on par with some of the other beverages I purchased. The downside was shipping. It was $12 to send the can roughly 400 miles up the coast of California. I suppose this was still cheaper than driving down there and buying it in person.

When it arrived, I had to let it chill for a bit. UPS trucks don’t have A/C, so I hoped the fact that it was rather warm on arrival hadn’t spoiled it.

At first smell, I notice the pleasant apple scent followed by… pasta? It’s super subtle, though, and might be something else masquerading as something starchy.

At first taste, I’m pleasantly surprised. With how Grapetiser turned out, I was expecting Appletiser to fall somewhere on the tastes-like-cat-litter spectrum. I’m sorry I doubted you, Appletiser.

The flavor is subtle and definitely doesn’t match the intensity of an American sparkling apple juice. This likely has to do with less added sugar. I’d bet they use green apples in this beverage, and if you’re an apple eater, you know they’re not super sweet but rather tart. It’s as if they took straight-pressed apple juice and carbonated it. Nothing fancy.

Plus, there’s no aftertaste. This was something I was genuinely worried about and am happy to know it’s not making an appearance.

Being crisp and fresh without looking gaudy or over the top in packaging is nice to experience. It’s fancy without being fancy, and it seems to be popular enough that people drink it in multiple countries. Judging the packaging by itself, there’s some room for improvement. The can looks very similar to Grapetiser and I wasn’t a huge fan in that camp, either.

Would I drink this again? Probably. Would I pay $12 for a can? Definitely not. If I ever travel to a country that sells it, I’ll definitely be sure to grab some.

Attempting to Stream a Webcam to an RTMP Server

Programming

This is a follow-up to this article I wrote about trying to get ffmpeg + ffserver running the Cat Cam. I abandoned that project and went in search of a new solution. What I came up with was ffmpeg + nginx. Here’s how that worked out.

After a night of streaming failure, I decided to give ffmpeg a shot at streaming to an RTMP server via nginx. RTMP servers are generally pretty basic in that they just relay what they receive to whoever connects. This seems like a pretty straightforward process, from what I can tell. The hardest part would be getting the nginx source bits and compiling them with the nginx-rtmp-module. Here’s how we’ll do that.

Quick Side Note

Before we begin: I found out during this process that I never compiled ffmpeg with H.264 support. If you didn’t either, let’s sidetrack for a moment. Run this to find out:

ffmpeg -encoders | grep 264

If H.264 isn’t on the list, then let’s re-compile:

./configure --enable-gpl --enable-libx264
make
make install
ldconfig

NOTE: If you get an error saying it can’t find the library:

ERROR: libx264 not found

then you’ll need to run:

apt-get install yasm libvpx-dev libx264-dev

Once that’s done, verify ffmpeg has H.264 support and let’s move on.

ffmpeg -encoders | grep 264

Nginx

Compile & Install

Since we’re building generic nginx from source, we’ll need to make sure some libraries are installed. You can always compile nginx without them, but that’s more work, in my opinion, and could lead to problems later. You might not need all of these, but the Linux system I’m working on was missing most of them; it never hurts to share.

apt-get update
apt-get install libpcre3 libpcre3-dev libssl-dev

We’ll need the nginx source. Pick it up here. I used nginx-1.10.1. I’m a fan of newer versions when possible, and it’s been out for a month now; I suspect it’s stable.

cd /usr/src
wget http://nginx.org/download/nginx-1.10.1.tar.gz
tar -xvf nginx-1.10.1.tar.gz

You’ll also need to grab the rtmp module for nginx:

git clone https://github.com/arut/nginx-rtmp-module

Once you have both of those, compile!

cd nginx-1.10.1
./configure --add-module=/usr/src/nginx-rtmp-module
make
make install

Nginx will be installed to /usr/local/nginx.

If you plan on running nginx from the command line and not from /usr/local/nginx all the time, you’ll want to create a symbolic link or add the directory to your PATH. I opted for the link:

ln -s /usr/local/nginx/sbin/nginx /usr/bin/nginx

Configure

Now that compiling is out of the way, let’s modify nginx.conf to add our rtmp bits. Your config file, located at /usr/local/nginx/conf/nginx.conf, should look something like this:

rtmp {
    server {
        listen 8081;
        chunk_size 4000;

        # HLS
        application hls {
            live on;
            hls on;
            hls_path /tmp/rtmp_hls;
        }
    }
}

# HTTP can be used for accessing RTMP stats
http {
    server {
        listen 80;

        # This URL provides RTMP statistics in XML
        location /stat {
            rtmp_stat all;

            # Use this stylesheet to view XML as web page
            # in browser
            # rtmp_stat_stylesheet stat.xsl;
        }

        # location /stat.xsl {
        #     # XML stylesheet to view RTMP stats.
        #     # Copy stat.xsl wherever you want
        #     # and put the full directory path here
        #     root /path/to/stat.xsl/;
        # }

        location /hls {
            # Serve HLS fragments
            types {
                application/vnd.apple.mpegurl m3u8;
                video/mp2t ts;
            }
            root /tmp;
            add_header Cache-Control no-cache;
        }
    }
}

events { worker_connections 1024; }

Whatever you set your hls_path to, make sure that directory exists. If you have an XML stylesheet (xsl), set it in the location /stat.xsl block, too, and uncomment rtmp_stat_stylesheet.
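For example, with the hls_path from the config above, something like this gets the fragment directory in place (the permissions are a guess on my part; match ownership to whatever user your nginx workers run as):

```shell
# Create the HLS fragment directory referenced by hls_path in nginx.conf.
mkdir -p /tmp/rtmp_hls
chmod 755 /tmp/rtmp_hls
```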

Kick off nginx and make sure it’s listening on your port:

nginx
netstat -an | grep 8081

If it’s running, you’ll see this:

root@ubuntu:/usr/local/nginx# netstat -an | grep 8081
tcp        0      0 0.0.0.0:8081            0.0.0.0:*               LISTEN

Now that we have that set up, let’s get a player. I’m opting for JW Player. You can pay money for it if you want, but you’re really just paying for their service; the player files are 100% free. This is their official site, but you can also snag it from GitHub.

How you want to implement it is up to you.

Setting Up ffmpeg

Let’s give this another go. The command you can use here is a little more complicated, as we’ll need to stream legitimate video, but here’s the idea:

ffmpeg -i /dev/video0 -framerate 1 -video_size 720x404 -vcodec libx264 -maxrate 768k -bufsize 8080k -vf "format=yuv420p" -g 60 -f flv rtmp://example.com:8081/hls/live

Breaking this down, we have the following:

-i /dev/video0 – The input stream.

-framerate 1 – The number of frames per second in the feed.

-video_size 720x404 – The size of the final video.

-vcodec libx264 – The H.264 codec we’re using.

-maxrate 768k – The max bitrate ffmpeg will use to compress the video. Naturally the higher the bitrate, the more bandwidth you’ll need.

-bufsize 8080k – The buffer ffmpeg will work with. It’ll need this if it can’t get the video out quick enough.

-vf "format=yuv420p" – The colorspace and raw video data format.

-g 60 – Sets the distance between the start and end of the Group of Pictures (GOP).

-f flv – The container format we’re sending to the server.

rtmp://… – The absolute URL of the RTMP stream.
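One bit of arithmetic worth doing on the flags above (my own math, not something from the docs): players can generally only start playback on a keyframe, and the keyframe interval is g divided by framerate, so -framerate 1 with -g 60 puts a full minute between keyframes.

```shell
# Keyframe interval implied by the flags above.
FRAMERATE=1   # -framerate 1
GOP=60        # -g 60
echo "keyframe every $((GOP / FRAMERATE)) seconds"
```

Bumping -framerate or shrinking -g would shorten that wait considerably.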

We’re Streaming!

At first I was super excited that it started working, but a problem arose pretty quickly: it streams way behind real time. After about ten minutes, the speed was near 1x, but still not quite there, and I suspect the buffer was quite full. VLC takes about 30 seconds to start playing the video.

At least it’s sorta working, right?

My JW player isn’t able to load the stream, so some tweaks will be needed.

If only…

The RPi 3 isn’t strong enough to live-encode H.264. I’d bet lots of dollars that if I had a stronger piece of hardware to work with, it could do it, and this wouldn’t be an issue. I might see what would happen if I used my MacBook as a test.

I really wish I could embed this into my site. JW Player isn’t ever going to be happy with the stream the way it is. A thought that came to mind is going back to Motion, using my streaming server to capture the mjpeg stream with ffmpeg, then relaying it to the rtmp server in a better format.

Stay tuned for part three when I figure out if that’s worth my time.

Fighting ffmpeg

Programming

Before I begin this seriously long-winded article (I wasn’t expecting it to be nearly 4000 words), I want to let you know that after roughly six hours of poking, yelling at myself, yelling at my screen, yelling at Google for not having the answers I need[1], and wishing I had just bought IP cameras, I realized ffserver doesn’t send mjpeg with the right Content-type header for browser viewing, thus said browsers don’t do the right thing to it.[2]

Typical mjpeg streams are served under Content-type: multipart/x-mixed-replace, whereas ffserver transmits with video/x-mjpeg. That’s fine for players like VLC, but browsers instinctively download it rather than display it on the screen. What this means for embedding it into a webpage is that the stream will download forever but only ever show the first frame. There is no setting for changing this; it’s hardcoded into libavformat/rawenc.c. After all the fuss of figuring out alternatives, I opted to change the value and re-compile.
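For the curious, the change itself is tiny. From the root of the ffmpeg source tree, a one-liner like this swaps the hardcoded MIME type before you configure and compile (the exact strings assume the rawenc.c layout described above; verify against your checkout before running):

```shell
# Hypothetical edit: replace the mjpeg muxer's hardcoded MIME type
# in libavformat/rawenc.c, then re-run ./configure && make.
sed -i 's#video/x-mjpeg#multipart/x-mixed-replace#' libavformat/rawenc.c
```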

This means, after the several hours of compiling ffmpeg, I had to re-compile (again)… but will it help? You’ll have to find out.

NOTE: I’ve also added some updates to the end from the day after. With a fresh mind, some fresh ideas came up.

Introduction

Recently I added a Raspberry Pi 3 to my collection of computing tools (mainly for testing arm builds of remote_syslog2)[3]. While it serves that purpose day in and out, such a task isn’t super intensive. While it’s just sitting there, I’d like it to do something Internet related, and what’s better than something involving cats? If you say whiskey, I’ll give you points. Life is pretty much whiskey, cats, and tacos, for me, as a single guy in his late 20s living in California.

This is where The Cat Cam came to life. I find it oddly amusing to sit and watch my cats (mostly sleep) either from somewhere else in my home or while I’m getting tacos. I figured why not let others do the same, in a not-at-all-creepy way?

Here’s a pseudo-journal of my progress through this project, including what I’ve learned, and why I did things in certain ways.[4]

Motion: An Option

Doing a cursory Google search for webcam streaming raspberry pi turned up but a few relevant search results[5]. I guess most people use IP cameras, these days? Either way, I was directed to this article on how to build a Raspberry Pi webcam server in minutes. Spoiler alert: it took more than minutes.

The idea is that you download motion and run it, tweaking a few things in the motion.conf settings file. The article is largely useful but I found the service didn’t work and I had to run it under sudo to get it to stream; that makes sense given motion has to open and listen on a couple ports.
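For reference, the kind of motion.conf tweaks I mean look roughly like this. Option names have shifted between motion versions, so treat every name and value here as illustrative rather than my exact config:

```
# motion.conf sketch (illustrative; check your version's docs for exact names)
stream_port 8081        # the port serving the mjpeg stream
webcontrol_port 8080    # the control port; set a password on it
framerate 1             # frames per second to capture
output_pictures off     # stop motion from saving a still for every frame
```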

A few pointers, here:

  • Turn motion detection off. motion was originally designed to capture video and stills when motion was detected, a la security camera.
  • Watch out for your saved snapshots. Unless you turn this off too, figure out how to make it automatically overwrite the files it saves, clear out all images older than x, or install a large SD card, you’ll run out of space. The rate of consumption is based on the frames per second being captured. Each image from my 1280×720 Microsoft LifeCam Cinema webcam was about 60kb at 75% JPEG quality. I burned through my 16GB SD card within a few days at one frame per second.
  • You’ll need a reverse proxy if you don’t want to give out your IP address. I opened a port on my router and set up a reverse proxy using nginx so as to mask my home IP. My router isn’t a huge fan of being port-sniffed, as I discovered when testing, so hiding it was a must.
  • The documentation for motion is sparse and seems old. The project is still maintained but if you can’t figure something out by looking in the motion.conf file, you probably won’t figure it out, at all.
  • Tweaking settings after install and while it’s running was easy using the control port. Set a password here, for sure.
  • Motion gets the Content-type header right: it outputs multipart/x-mixed-replace. Why is that important? You’ll find out later.
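The SD-card math from the snapshot bullet above checks out. At one roughly 60kb frame per second:

```shell
kb_per_image=60
images_per_day=$((60 * 60 * 24))   # one frame per second, all day
kb_per_day=$((kb_per_image * images_per_day))
echo $((kb_per_day / 1024))        # megabytes per day
```

That’s about 5GB a day, so a 16GB card really does fill up “within a few days.”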

Switching to ffmpeg + ffserver

One of the biggest downsides to the configuration I had was the connections to my motion server through my proxy were 1-to-1, meaning there was no relay or retransmission which could save on bandwidth. This would end up saturating my home uplink (capped at 10mbps by my ISP). As much as I’d like to, I can’t devote all of it to cats.

In comes ffmpeg and ffserver. This configuration is similar to motion in that there’s a process capturing the feed from my webcam on /dev/video0 and sending it to a server but the server that’s broadcasting isn’t also on the Pi. Setting this up is broken into two parts: getting ffmpeg on my Pi (and running) and setting up the server and configuring it to accept the feed and rebroadcast.

Using (compiling) ffmpeg on ARM

This is, by far, the longest step in this process. ffmpeg doesn’t come natively compiled for ARM so you’ll find yourself wanting to compile it. If you’ve ever compiled code on a Raspberry Pi before, you’ll know this takes time. The smartest in the group will line up something else to do for when the hurry-up-and-wait step arrives.

Since there wasn’t any specific article that I could find which accurately depicted how to get ffmpeg compiled and working on arm[6], I’ll walk you through it.

I’m using a Raspberry Pi 3 with Raspbian. I’m running all my commands as a regular user with sudo abilities.

Before we do anything else, we need to make sure we have the latest package info and install git. Nothing special about installing it.

sudo apt-get update
sudo apt-get install git

Let’s move into the /usr/src directory and pull down the latest version of ffmpeg. (Note: most of what we do in /usr/src has to be done with sudo rights as /usr/src is owned by root.)

cd /usr/src
sudo git clone https://git.ffmpeg.org/ffmpeg.git

I’m skipping sound[7], but if you need it, grab libasound2-dev.

Now that we have the code pulled down, let’s hop into the folder and compile it.

cd ffmpeg
sudo ./configure
sudo make
sudo make install

If that fails for you, you’ll need to install libav-tools. You can pick it up using the package manager:

sudo apt-get install libav-tools

Once those are installed, give the compile and install another go.

This step took a couple hours. I don’t know how long, exactly, but it was long enough to where I stopped paying close attention and did other tasks[8].

We’ll need to get source from deb-multimedia.org. This’ll require a few bonus steps, but shouldn’t be too bad. If you’ve ever added extra repositories, you’ll be familiar with this process. If not, no worries; I’ve broken it down.

Add these lines to /etc/apt/sources.list:

deb-src http://www.deb-multimedia.org sid main
deb http://www.deb-multimedia.org wheezy main non-free

Then update, again:

sudo apt-get update

and install deb-multimedia-keyring:

sudo apt-get install deb-multimedia-keyring

It’ll be necessary to get packages from this repo installed properly.

Remove the second source we added earlier, because we don’t need it any longer, and it’ll keep things clean:

deb http://www.deb-multimedia.org wheezy main non-free

then download the source for ffmpeg-dmo:

sudo apt-get source ffmpeg-dmo

This’ll take a few minutes, but once it’s done, you’ll find you now have a /usr/src/ffmpeg-dmo-xxx folder, where xxx is a version number. Hop into that directory

cd ffmpeg-dmo-xxx

and compile and install it:

sudo ./configure
sudo make
sudo make install

This part will take probably just as long as compiling/installing ffmpeg from earlier, so feel free to move on to other cool things. Just don’t forget we’re here. Those cats need to be seen!

Setting Up ffserver on a Streaming Relay Server

While ffmpeg is compiling, we can work on configuring ffserver, the other half of this setup. ffserver comes with ffmpeg so there’s no extra installation of anything. I’m running ffserver on another server, outside my home network.

ffserver doesn’t come configured for anything (it doesn’t even come with a config file), so we’ll need to set one up at /etc/ffserver.conf, as that’s where it looks. (If you want to put it elsewhere, start ffserver with the -f argument followed by the absolute path to your config file.)

In that file, make it look something like this:

HTTPPort nnnn
HTTPBindAddress 0.0.0.0
MaxClients 250
MaxBandwidth 10000

<Feed stream.ffm>
File /tmp/stream.ffm
FileMaxSize 10M
</Feed>

<Stream stream.mjpeg>
Feed stream.ffm
Format mjpeg
VideoSize 640x480
VideoFrameRate 10
VideoBitRate 2000
VideoQMin 1
VideoQMax 10
</Stream>

If you’re curious as to what all you can tweak, check out this official sample ffserver.conf file from ffmpeg.org.

Breaking down the config file, here’s what I set and why:

HTTPPort nnnn – This will be the port ffserver is listening and serving on. Make sure it’s not in use by anything else and traffic can flow freely in and out of it.

HTTPBindAddress 0.0.0.0 – Only useful if you have more than one IP and want to limit it to listening on that IP, only. Using 0.0.0.0 lets it listen on all IPs the system knows about.

MaxClients – This is the number of unique connections. ffserver isn’t slow by any means so crank this up and rely on the next setting as your throttle control.

MaxBandwidth – In kbps, set this to where you feel comfortable.

ACL allow n.n.n.n – Set this if you want to only allow certain IPs to send their bits as the feed ffserver will process. In this case, when primetime is upon us, I’ll set this to my home IP address as my Raspberry Pi is behind a NAT.

Feed [filename] – This is super important. Make sure this exactly matches the name in the opening <Feed> tag, not the File path.

Format [type] – This can be any one of the following types:

  • mpeg : MPEG-1 multiplexed video and audio
  • mpeg1video : MPEG-1 video only
  • mpeg2video : MPEG-2 video only
  • mp2 : MPEG-2 audio (use AudioCodec to select layer 2 and 3 codec)
  • ogg : Ogg format (Vorbis audio codec)
  • rm : RealNetworks-compatible stream. Multiplexed audio and video.
  • ra : RealNetworks-compatible stream. Audio only.
  • mpjpeg : Multipart JPEG (works with Netscape without any plugin)
  • jpeg : Generate a single JPEG image.
  • asf : ASF compatible streaming (Windows Media Player format).
  • swf : Macromedia Flash compatible stream
  • avi : AVI format (MPEG-4 video, MPEG audio sound)

There are two I would stick with: mjpeg and mpeg. The former is supported by just about every browser these days, while mpeg is a straight-up video file. mjpeg is also easier to embed into a website, as it’s just one line:

<img src="http://example.com:1234/stream.mjpeg" />

Your browser will likely know what to do with that[9]. Using mpeg will require a video player. I’d stay away from Flash-based players these days (unless you have swf as a <Stream> format, which I’ll cover in a minute) because Flash is garbage, ugly, and has no real purpose in today’s web world.

NOTE: See the MPEG section below about using mpeg and taking care of your audio stream, and about using the non-audio mpegNvideo formats.

Whatever format you choose, make sure the file extension matches the one in the opening <Stream> tag.

VideoSize AxB – This is the final frame size, in pixels. Set this to whatever you feel like, keeping within the same ratio of your source. If you have a 16×9 webcam like myself, do some quick math to get a number that’s useful, rounded to the nearest whole number:

width * .5625 = height
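For example, a 720-pixel-wide frame from a 16×9 source (0.5625 is just 9/16, so integer math gets there too):

```shell
# height = width * 9 / 16
echo $((720 * 9 / 16))
```

That’s where the 720×405 size used later in this post comes from.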

You can also use abbreviations for standard sizes:

sqcif, qcif, cif, 4cif, qqvga, qvga, vga, svga, xga, uxga, qxga, sxga, qsxga, hsxga, wvga, wxga, wsxga, wuxga, woxga, wqsxga, wquxga, whsxga, whuxga, cga, ega, hd480, hd720, hd1080

Not sure what they mean? Check out this Wikipedia article and search for the abbreviation within.

VideoFrameRate 10 – frames per second.
VideoBitRate 2000 – The video bitrate in kbps. Whatever you set here, divide that into your MaxBandwidth from earlier and that’ll effectively be the max number of people that can watch your stream.
VideoQMin and VideoQMax – The lower the number, the better the overall quality. If you set a range between Min and Max, the quality will adjust as needed for the best picture. The range runs from 1 (highest possible quality) to 31 (lowest).
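Following the VideoBitRate note above, the effective viewer cap for the config in this post works out like so:

```shell
max_bandwidth=10000  # MaxBandwidth, in kbps
video_bitrate=2000   # VideoBitRate, in kbps
# Roughly how many people can watch before MaxBandwidth throttles the stream
echo $((max_bandwidth / video_bitrate))
```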

NOTE: See the section below titled Tweaks to the Feed for how important it is to make sure these settings are right.

A Note about Flash & Fallbacks

If you want to have a fallback for non-HTML5 video player support, set up a second <Stream> and configure it for .swf. This might not be necessary as good Flash-based players will play regular video files. It’s worth thinking about.
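A second stream block for the fallback might look something like this (the stream name and values here are illustrative, not my exact config):

```
<Stream stream.swf>
Feed stream.ffm
Format swf
VideoSize 640x480
VideoFrameRate 10
</Stream>
```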

If you’re looking for Windows Media Player and other streaming app support on older systems, think about .asf and .rm. They’re old, but ffmpeg and ffserver have been around a long time.

Starting ffmpeg and ffserver

Now it’s time to get this shit started.

We’ve spent all this set up time, and now we’re ready. You’ll want to launch ffserver first. If you put your config file in the default location and gave it the default name (/etc/ffserver.conf), all you need to do is run:

ffserver

And boom. If something’s not right about your config, you’ll know pretty quickly. Yellow text can be ignored as it’s informational. It’s up to you if you want to do anything about it.

The output will look something like this if you did it right:

root@ubuntu:~# ffserver
ffserver version N-80901-gfebc862 Copyright (c) 2000-2016 the FFmpeg developers
  built with gcc 4.8 (Ubuntu 4.8.4-2ubuntu1~14.04.3)
  configuration: --extra-libs=-ldl --prefix=/opt/ffmpeg --mandir=/usr/share/man --enable-avresample --disable-debug --enable-nonfree --enable-gpl --enable-version3 --enable-libopencore-amrnb --enable-libopencore-amrwb --disable-decoder=amrnb --disable-decoder=amrwb --enable-libpulse --enable-libfreetype --enable-gnutls --enable-libx264 --enable-libx265 --enable-libfdk-aac --enable-libvorbis --enable-libmp3lame --enable-libopus --enable-libvpx --enable-libspeex --enable-libass --enable-avisynth --enable-libsoxr --enable-libxvid --enable-libvidstab
  libavutil      55. 28.100 / 55. 28.100
  libavcodec     57. 48.101 / 57. 48.101
  libavformat    57. 41.100 / 57. 41.100
  libavdevice    57.  0.102 / 57.  0.102
  libavfilter     6. 47.100 /  6. 47.100
  libavresample   3.  0.  0 /  3.  0.  0
  libswscale      4.  1.100 /  4.  1.100
  libswresample   2.  1.100 /  2.  1.100
  libpostproc    54.  0.100 / 54.  0.100
/etc/ffserver.conf:5: NoDaemon option has no effect. You should remove it.
/etc/ffserver.conf:20: Setting default value for video bit rate tolerance = 250000. Use NoDefaults to disable it.
/etc/ffserver.conf:20: Setting default value for video rate control equation = tex^qComp. Use NoDefaults to disable it.
/etc/ffserver.conf:20: Setting default value for video max rate = 2000000. Use NoDefaults to disable it.
/etc/ffserver.conf:20: Setting default value for video buffer size = 2000000. Use NoDefaults to disable it.

So cool!

Launching ffmpeg is almost as simple. We’re going to pass the raw webcam feed so all we need to do is:

ffmpeg -i /dev/video0 http://example.com:1234/stream.ffm

This’ll tell ffmpeg to take what’s coming in from /dev/video0 and pass it to the feed ingestion point on ffserver. If /dev/video0 isn’t available, check for another video device; if you have more than one, they’ll be numbered in the order they were plugged in, in most cases. You can also ask ffmpeg which device interfaces it supports by running:

ffmpeg -devices

The output from ffmpeg will look something like this if you did it right:

pi@rpi3-01:/usr/src/ffmpeg-dmo-3.1.1 $ ffmpeg -i /dev/video0 http://example.com:1234/stream.ffm
ffmpeg version N-80908-g293484f Copyright (c) 2000-2016 the FFmpeg developers
  built with gcc 4.9.2 (Raspbian 4.9.2-10)
  configuration:
  libavutil      55. 28.100 / 55. 28.100
  libavcodec     57. 48.101 / 57. 48.101
  libavformat    57. 41.100 / 57. 41.100
  libavdevice    57.  0.102 / 57.  0.102
  libavfilter     6. 47.100 /  6. 47.100
  libswscale      4.  1.100 /  4.  1.100
  libswresample   2.  1.100 /  2.  1.100
Input #0, video4linux2,v4l2, from '/dev/video0':
  Duration: N/A, start: 278280.554691, bitrate: 147456 kb/s
    Stream #0:0: Video: rawvideo (YUY2 / 0x32595559), yuyv422, 640x480, 147456 kb/s, 30 fps, 30 tbr, 1000k tbn, 1000k tbc
[swscaler @ 0x21bc770] deprecated pixel format used, make sure you did set range correctly
[ffm @ 0x21a7e70] Using AVStream.codec to pass codec parameters to muxers is deprecated, use AVStream.codecpar instead.
Output #0, ffm, to 'http://example.com:1234/stream.ffm':
  Metadata:
    creation_time   : now
    encoder         : Lavf57.41.100
    Stream #0:0: Video: mjpeg, yuvj422p(pc), 640x480, q=1-10, 1000 kb/s, 30 fps, 1000k tbn, 15 tbc
    Metadata:
      encoder         : Lavc57.48.101 mjpeg
    Side data:
      cpb: bitrate max/min/avg: 2000000/0/1000000 buffer size: 2000000 vbv_delay: -1
Stream mapping:
  Stream #0:0 -> #0:0 (rawvideo (native) -> mjpeg (native))
Press [q] to stop, [?] for help
frame= 1227 fps= 15 q=24.8 size=   17500kB time=00:01:21.73 bitrate=1754.0kbits/s dup=0 drop=1207 speed= 1x

When I first wrote this, I was concerned about drop= being anything more than zero, but as it turns out, dropped frames are any source frames that aren’t converted and transmitted. With a 30fps source transmitting at 1fps, you’ll have 29 dropped frames every second.

If ffserver isn’t running or isn’t listening on the port, you’ll get this message:

[tcp @ 0x18e7410] Connection to tcp://example.com:1234 failed: Connection refused
http://example.com:1234/stream.ffm: Connection refused

You’ll get a similar message, but about a broken pipe, if ffserver was running and ffmpeg was sending to it, but the connection subsequently disappeared, was killed, or was otherwise terminated in an ungraceful fashion.

Tweaks to the Feed

Sending the raw feed can be a bit bandwidth-intensive if you’re on something like a DSL connection. My 1280×720 raw feed transmits at about 220KB/second. That’s about 17% of my uplink (10mbps). What you set for quality, size, and bitrate in the <Stream> portion of your ffserver config will determine the rate at which ffmpeg transmits[10].
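Sanity-checking that 17% figure:

```shell
kb_per_sec=220                # raw feed, kilobytes per second
kbps=$((kb_per_sec * 8))      # convert to kilobits per second
echo "$kbps"                  # kbps the feed consumes
echo $((kbps * 100 / 10000))  # percent of a 10,000 kbps (10mbps) uplink
```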

That Damn Buffer

The buffer might be large, so there could be a significant time shift. In my testing, 720×405 at 30fps yielded close to one minute behind live. This might not be a huge deal for you, but it was, for me. I suspect this was because of my Raspberry Pi’s inability to encode and transmit quick enough.

MPEG

  • If you use mpeg, you’ll want to either make sure you have a valid audio stream set up, or disable audio in your <Stream> config using the NoAudio setting.
  • I had a hard time getting Format mpeg to work. ffmpeg couldn’t figure out what to do with the stream. I did some digging and it turns out mpegvideo isn’t a legit option, but mpeg1video and mpeg2video are.

Bandwidth

Bandwidth was a huge issue for me in the early days (read: a few days ago), so this was something I definitely wanted to pay attention to now. I tested a few different combinations of sizes and frames/second. A few examples of what I saw in terms of bandwidth needed:

Dimensions | Frames/second | Quality | Bitrate | Observe `buffer underflow`
1280×720 | 1 | min: 1 / max: 20 | ~520kbps | no
1280×720 | 15 | min: 1 / max: 20 | NaN[^11] | yes
720×405 | 30 | min: 1 / max: 20 | ~2100kbps | no

In the beginning, there’ll be a spool up as `ffmpeg` tries to get the feed to `ffserver` and get it caught up to live.

Embedding the Stream

This is where we run into trouble. If you read my Pre-Introduction, you’ll know I had to recompile ffmpeg/ffserver to change the Content-type header to multipart/x-mixed-replace.

Real-Time Streaming Protocol

I tried this, too. I won’t mention all the specifics but I could never get this to work. I’m not sure what I was doing wrong, but ffserver was never able to connect to its own stream to push it out via RTSP. I suppose you can’t win them all.

Re-compile ffmpeg for the Custom Content-type

I thought all I’d need to do was re-compile ffmpeg on my Pi and I’d be set. I was partially on target, there. I definitely needed to recompile to change the header, but on the relay server (where I’m running ffserver) instead. Rather than installing via apt-get, I had to follow the same instructions as I outlined for the Pi.

For my system, I had to install a few extra packages:

apt-get install build-essential gcc yasm pkg-config

Replacing the Content-type in libavformat/rawenc.c wasn’t enough, though, as the server still responded with video/x-mjpeg. At this point, I’m not entirely sure what to do. I’m thinking about finding another route.

At the End of the Day

Ugh. This turned out to be more complicated than I originally anticipated. In my mind, it seemed easy to transmit the data from the webcam to a relay server. My original idea was to take the mjpeg from motion, save it on my web server, link to it with an <img /> tag, then use jQuery to refresh the <img /> with cache-breaking.

Content-type, Content-type, Content-type! I’m going to have dreams about HTTP headers.

Alternatives

After originally posting this, I started searching for new options. I found stream-m and might give that a shot.

Feedback

If you see anything I might have missed, let me know in the comments, below.

Update: The Day After

After thinking about this some more (#showerthoughts), I remembered we had also installed ffmpeg-dmo, which likely was still configured improperly. After updating and re-compiling it, I was able to get the new header, but the browser didn’t display the video as a video, but rather binary garbage. VLC was still able to play it.

This led me to look into the boundary= attribute for the Content-type header. After flipping this on with -strict_mime_boundary true on the sending side, it didn’t help. I suspect having the Content-type be multipart/x-mixed-replace isn’t actually helping. It seems odd that’s the case, though, as Motion returns that content-type. I’ll have to look into it more.
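For context, a browser-friendly mjpeg response is shaped roughly like this (the boundary name and lengths here are illustrative):

```
HTTP/1.1 200 OK
Content-Type: multipart/x-mixed-replace; boundary=frame

--frame
Content-Type: image/jpeg
Content-Length: 61234

<JPEG bytes>
--frame
...
```

Each `--frame` part replaces the previous one in the browser, which is what makes the stream animate instead of downloading forever.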

I swapped the mime-type back to video/x-motion-jpeg and re-compiled on the Pi. While this was going on, I looked into other options. I mentioned stream-m earlier, and originally liked the idea, but in order to make it work I still have to use ffmpeg as the sender and compile it myself. There are no instructions on how to do that, and I’ve never compiled Java before. Today isn’t going to be the day that I learn.

With all that being said, I’m scrapping the ffmpeg + ffserver idea. Be on the lookout for a follow-up post where I give RTMP streaming a shot.


  1. All with lots of profanity, I promise. ↩︎

  2. Internet Explorer is a little shit eater so it won’t do the right thing, ever. ↩︎

  3. I work for Papertrail. I also help maintain the remote_syslog2 repo. They are not mutually exclusive. ↩︎

  4. I’m sorry if this is in complete disarray. By the time I got to the end of it, I didn’t really care how organized it was. If anything, it’s more of a chronological timeline of events. ↩︎

  5. This is clearly something nobody does, ever. /s ↩︎

  6. That was not out of date ↩︎

  7. ffmpeg doesn’t have the libraries to make that happen ↩︎

  8. Grand Theft Auto V, pretty much. ↩︎

  9. Oh how naive I was when I wrote that. ↩︎

  10. This can lead to things like buffer underflow if the device can’t churn quick enough. ↩︎


