Memory leak in Zimbra 8.0.6 webmail

I’ve had a suspicion that since the Zimbra 8.0.6 update, something’s been wonky with Zimbra’s webmail client, so I decided to perform a very simple test: open Zimbra Webmail and leave it running.

Here is the outcome of that test.

Normal Operation for One Business Day:
This is how I operate day after day in my normal office environment.

Running:

  • Zimbra 8.0.6_GA_5922
  • Chromium Version 32.0.1700.123 Debian 7.4 (248368)

Memory Usage at Application Launch:

  • Browser window with Zimbra webmail client
    Thursday 8:47am – 139.5MB
  • Browser window with Google
    Thursday 8:55am – 45.92MB

Memory Usage ~24 Hours Later:
I left both browser windows running overnight. Here is where the memory usage stands…

  • Browser window with Zimbra webmail client
    Friday 8:28am – 564.3MB – 304% Increase in 24 Hours
  • Browser window with Google
    Friday 8:29am – 60.78MB – 32% Increase in 24 Hours

Memory Usage ~4 Days Later:
I left both browser windows running over the weekend.  Here is where the memory usage stands…

  • Browser window with Zimbra webmail client
    Monday 10:32am – 1.6 GB – 1,046% Increase in ~96 Hours
  • Browser window with Google
    Monday 10:33am – 60.53MB – 31% Increase in ~96 Hours

Since Zimbra cut the Evolution Connector from its product line, and Zimbra Desktop is still only available as a 32-bit build, Zimbra on Linux is left sorely lacking. What has VMware done?!  Hopefully Telligent can fix it.

-Robbie

Preventing rsync from doubling (or even tripling) your S3 fees.

Using rsync to upload files to Amazon S3 over s3fs?  You might be paying double (or even triple) the S3 fees.

I was observing the file upload progress on the transcoder server this morning, curious how it was moving along, and I noticed something: the currently uploading file had an odd name.

My file, CAT5TV-265-Writing-Without-Distractions-With-Free-Software-HD.m4v, was being uploaded as .CAT5TV-265-Writing-Without-Distractions-With-Free-Software-HD.m4v.f100q3.

I use rsync to upload the files to the S3 folder over S3FS on Debian, because it offers good bandwidth control.  I can restrict how much of our upstream bandwidth is dedicated to the upload and prevent it from slowing down our other services.
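The exact invocation isn't important, but for illustration it looks something along these lines (the paths and the 2000 KB/s cap here are made-up values, not our actual setup):

rsync -av --progress --bwlimit=2000 /storage/episodes/ /mnt/s3/episodes/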

Noticing that filename this morning, and understanding the way rsync works, I knew the temporary file would be renamed to its real name the instant the upload was complete.

In a normal disk-to-disk operation, or when rsync'ing over something such as SSH, that's fine: a mv from one name to another uses virtually no resources and certainly doesn't cost anything, since it's a simple rename operation. So why did my antennae go up this morning? Because I also know how S3FS works.

A rename operation over S3FS means the file is first downloaded to a file in /tmp, renamed, and then re-uploaded.  So what rsync is effectively doing is:

  1. Uploading the file to S3 with a random filename, with bandwidth restrictions.
  2. Downloading the file to /tmp with no bandwidth restrictions.
  3. Renaming the /tmp file.
  4. Re-uploading the file to S3 with no bandwidth restrictions.
  5. Deleting the temp files.

Fortunately, this is 2013 and not 2002.  The developers of rsync realized at some point that direct uploading may be desired in some cases.  I don’t think they had S3FS in mind, but it certainly fits the bill.

The option is --inplace.

Here is what the manpage says about --inplace:

This option changes how rsync transfers a file when its data needs to be updated: instead of the default method of creating a new copy of the file and moving it into place when it is complete, rsync instead writes the updated data directly to the destination file.

It's that simple!  Adding --inplace to your rsync command will cut your Amazon S3 transfer fees by as much as 2/3 for future rsync transactions!
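Using the same illustrative command from earlier, that just means adding one option:

rsync -av --progress --inplace --bwlimit=2000 /storage/episodes/ /mnt/s3/episodes/

With --inplace, rsync writes straight to the destination file on the S3FS mount, so there is no temporary file to download, rename and re-upload.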

I’m glad I caught this before the transcoders transferred all 314 episodes of Category5 Technology TV to S3.  I just saved us a boatload of cash.

Happy coding!

- Robbie

Running phpcs against many domains to test PHP5 Compatibility.

Running a shared hosting service (or otherwise having a ton of web sites hosted on the same server) can pose challenges when it comes to upgrading.  What’s going to happen if you upgrade something to do with the web server, and it breaks a bunch of sites?

That’s what I ran into this week.

For security reasons, we needed to knock PHP4 off our Apache server and force all users onto PHP5.

But a quick test showed us that this broke a number of older sites (especially sites running on old code for things like osCommerce or Joomla).

I can’t possibly scan through billions of lines of client code to see if their site will work or break, nor can I click every link and test everything after upgrading them to PHP5.

So automation takes over, and we look at PHP_CodeSniffer with the PHPCompatibility standard installed.

Making it work was a bit of a pain in the first place, and you'll need some know-how to get it going.  There are inconsistencies in the documentation and even some incorrect instructions on getting it running.  However, a good place to start is http://techblog.wimgodden.be…

Running the command on a specific folder (e.g. phpcs --extensions=php --standard=PHP53Compat /home/myuser/domains/mydomain.com/public_html) works great.  But as soon as you try to run it through many, many domains, it craps out: it literally just hangs, and usually not until it's been running for a few hours, so it's a real waste of time.

So I wrote a quick script to help with this issue.  In its existing form (feel free to mash it up to suit your needs) it first generates a list of all public_html and private_html folders found recursively under your /home folder.  It then runs phpcs against everything it finds, but does it one site at a time (so no hanging).

I suggest you run phpcs against one domain first to ensure that you have phpcs and the PHPCompatibility standard installed and configured correctly.  Once you’ve successfully tested it, then use this script to automate the scanning process.

You can run the script from anywhere, but it must have a tmp and results folder within the current folder.

E.g.:
mkdir /scanphp
cd /scanphp
mkdir tmp
mkdir results

And then place the PHP file in /scanphp and run it like this:
php myfile.php (or whatever you ended up calling it)

Remember, this script is to be run through a terminal session, not in a browser.
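To give you an idea of the approach, here is a simplified sketch of such a script (an illustration only, not the exact script: the find command, the result-file naming, and the hard-coded PHP53Compat standard are assumptions you may want to adapt):

<?php
// Simplified sketch: find every public_html and private_html folder under
// /home, then run phpcs (with the PHPCompatibility / PHP53Compat standard)
// against each one in turn, writing one report per site into ./results.
// Assumes phpcs is in your PATH and that ./tmp and ./results already exist.

$raw = shell_exec("find /home -type d \\( -name public_html -o -name private_html \\) 2>/dev/null");
$folders = array_filter(array_map('trim', explode("\n", (string)$raw)));

foreach ($folders as $folder) {
    // Turn the path into a readable report name, e.g.
    // /home/myuser/domains/mydomain.com/public_html -> home_myuser_domains_mydomain.com_public_html.txt
    $name = trim(str_replace('/', '_', $folder), '_');
    echo "Scanning $folder ...\n";

    // One site at a time, so a single problem site can't hang the whole run.
    shell_exec(
        "phpcs --extensions=php --standard=PHP53Compat " . escapeshellarg($folder) .
        " > " . escapeshellarg("results/$name.txt") . " 2>> tmp/errors.log"
    );
}

echo "Done.  Check the results folder for each site's report.\n";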

See what we're doing there?  Easy breezy, and it solves the problem of having to run phpcs against a massive number of domains.

Let me know if it helped!

- Robbie

Walk-in Wifi Responder

Had a thought this morning that wifi could be used to do some pretty rad stuff… like detecting when I get home by noticing my iPod touch.

Since most of us carry wifi-enabled devices with us at all times, and most of us have them set to auto-connect once in range of our routers, I thought, why not use that data?  It could be as simple as logging comings and goings, or as sophisticated as automatically turning on my favorite music when I walk in the door.  Or even adjusting the thermostat as I come and go, to save energy when nobody is home.

As a very brief proof of concept, I whipped up a simple script in PHP which can be run from any Linux computer on your network.

Usage:  php wifi-check.php --device=devicename
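One simple way to check for a device looks roughly like this (a sketch of the general idea, assuming the device's hostname resolves on your LAN and answers ping; a sleeping device may not respond, in which case checking the ARP table is a reasonable alternative):

<?php
// Sketch only: check whether a named device is currently active on the LAN.
// Usage: php wifi-check.php --device=devicename
// Assumes the device name resolves (e.g. via your router's DNS) and that the
// device answers ping.

$options = getopt('', array('device:'));
if (empty($options['device'])) {
    die("Usage: php wifi-check.php --device=devicename\n");
}
$device = $options['device'];

// One ping, short timeout; the exit code tells us whether we got a reply.
exec('ping -c 1 -W 2 ' . escapeshellarg($device) . ' > /dev/null 2>&1', $output, $status);

if ($status === 0) {
    echo "$device is active on the network.\n";
    // ...trigger your "just walked in" actions here (music, thermostat, logging)
} else {
    echo "$device does not appear to be on the network.\n";
    // ...trigger your "away" actions here
}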

My thinking is to put something like this in a looping script and let it run every few seconds, calling particular functions when the device changes between active and inactive.

I’d welcome your thoughts in the comment section below.  What practical things could this be used for?