Enough with the content algorithms!

I’m writing this because I’ve had enough of the mindf***ing algorithms that every single social media service employs these days, in varying flavors. What do I mean?

Well, have you indicated your preference for something on Facebook? Are you surprised by the fact that the posts you see are always geared toward those preferences? Are you surprised when the ads you see are also about the stuff you might be using or want to buy? Are you surprised that you see virtually nothing about stuff you didn’t indicate you like or are interested in? Are you surprised when you see an ad along the very same lines laid out above, interspersed between every 3-4 posts, and it’s a video ad that repeats, over and over and over, until you have to hide it and also tell Facebook to hide all ads from that brand, but then a different ad for that same product pops up again from another account, and you have to hide that and hide all from that brand, only to go through the same s**t, day in and day out?

Have you viewed a few videos on YouTube on a particular topic, say the latest digital cameras, and now your YouTube homepage is filled with videos on that topic? How about the recommended videos in the sidebar? Did you get enough of that topic the first time around and already make your decision, but now you can’t seem to be rid of videos about digital cameras that make you doubt that decision, with reviews where “experts” are yelling at you that this other model is better, so much better than the one you want to buy, and by the way, they have an affiliate link in the description that you should click on when you buy it? Do you struggle to find other content now, because all that YouTube recommends to you are more videos on digital cameras with more “experts” voicing their “opinion”? Are you afraid to search for some other stuff on YouTube because you know that for the next few weeks, you’ll be inundated with more videos on those very same keywords, even though you’ve already seen all you ever wanted to see?

Have you posted photos of a watch or a pen on Instagram, only to see tons of ads for watches and pens, and get recommendations to like more accounts on watches and pens? Do you find it hard to see anything else on Instagram, because that’s pretty much all they’ll shove down your throat, putting ads for watches and pens between every 2-3 actual posts (for watches and pens)?

Isn’t AI fun? Isn’t social media fun? Don’t you love how it’s catered to your very needs, even though you don’t know they’re your needs and you don’t want them to be your needs, but they’ll be your needs goddammit, because that’s what the social media algorithms are force-feeding you?

Well, f**k all this s**t. I’ve had enough. Facebook, Google, you guys need to adjust your algorithms. This is absolutely ridiculous. The world is a varied place. Humans are varied, diverse individuals. Just because one day we want to see a video about [insert topic here], it doesn’t mean we want to see more videos on that same topic later that same day, or the next day, or every damned day for the next few weeks, until your algorithms figure we’ve had enough. And we definitely don’t want to see ads for that s**t haunting us whenever we use your services and your websites and wherever else we might go (yes, AdWords and Facebook Pixel, I’m talking about your omnipresent ads for whatever product we might have once seen somewhere). We want variety. We need variety. We need to see and experience opposing viewpoints on a topic. Sameness, day in, day out, is a real mindf**k. It’s not the real world, but since we tend to experience the world through social media, the responsibility falls on you to represent the real world in a real manner.

This has got to stop. These algorithms have got to be changed. They need to become more human. Do you realize you can drive someone mad with your code, haunting them with more and more and more on something they only wanted to see once, something they can’t be rid of now? Do you realize you should be held responsible for the mental health of the people who use your services? It’s high time that fact dawned on you. Change your practices! Do it now.

SmugMug, are you listening?

I’m disappointed with SmugMug over their continued lack of support for proper export and maintenance of photographs directly from Lightroom. Back in July, I wrote about the Flickr Publish Service in Lightroom, and wondered when SmugMug would introduce their own.

What I was really looking for (and I said this in the post) was a way for the publish service to identify what I’ve already uploaded and allow me to re-publish those photos where I’ve made changes to the metadata or to the processing. The official Flickr Publish Service didn’t offer that option.

A few of my readers (Gary, Chris, Russell, thanks!) pointed me to Jeffrey Friedl’s excellent plugins for Lightroom, and I’ve been using them ever since. As a matter of fact, I’ve switched over to them completely. I use them for all four web services where I currently publish photos (SmugMug, Flickr, Facebook and PicasaWeb). I don’t know what I’d do without them. Wait, I do know — I know for sure I’d be doing a LOT more work and spending a LOT more time uploading and maintaining my online collections.

With Jeffrey’s LR plugins, I was able to identify about 90% of the photos already uploaded to SmugMug, and about 75% of the photos already uploaded to Flickr. In the case of Flickr, I then did manual updates and re-matching so that the plugin now recognizes about 95% of the photos already uploaded. This means Lightroom now allows me to quickly identify, update and replace almost any photo I’ve got at SmugMug, Flickr, Facebook and PicasaWeb. This is huge.

There is a catch, though, and it’s a BIG one. I keep running into the same “Wrong Format ()” error with SmugMug, which means I still haven’t been able to straighten out the photos I’ve uploaded to them. Here are a couple of screenshots of the error messages I get. It starts with a “TimedOut” error, then I get the “Wrong Format ()” error, then the upload process aborts.

I get these errors almost every time I try to re-publish an updated photo, but I don’t get them as often when I try to upload new photos. To give you an idea of how bad things are, I’ve currently got 109 photos to update in one of my galleries at SmugMug, and last night, I had about 167 photos. I’ve had to restart the re-publish process about 30-40 times since last night. Do the math: that’s roughly 58 photos across 30-40 restarts, which works out to 1-2 photos per error. This sucks. I should be able to just click the Publish button and walk away, knowing all of my changes will propagate correctly.

I’ve contacted Jeffrey, and I’ve contacted SmugMug. I’ve had extensive email conversations with each. SmugMug alternates in their replies. They’ve said the following to me:

  • It’s a fault with the plugin
  • It’s something on their end but they’re working on it
  • There’s nothing they can do about it
  • I should use something else to upload photos
  • It’s a problem with my setup (which we later ruled out with some internet connectivity tests)

Jeffrey says there’s nothing he can do about it, and I believe him more than I believe SmugMug. Want to know why? Because his other plugins work just fine. I’m able to re-publish updated photos to Flickr and Facebook and PicasaWeb without any problems. Only SmugMug somehow can’t handle my uploads.

I’ve tried reloading the plugin, reinstalling it from scratch, removing and re-adding the publish service, and upgrading the plugin, but nothing has helped. I still get the same errors.

My question for the smug folks at SmugMug is this: how is it possible that Facebook and Flickr and PicasaWeb have worked out the re-publish issues, but you haven’t? What’s taking you so long? Why can’t you work out the same problem on your end?

I was hoping that with the release of Lightroom 3.2 and the official SmugMug Publish Service for LR (hat tip to David Parry for the advance notice), SmugMug would work out the kinks in their API, but it looks like they still haven’t. I tried their plugin, but of course they took the easy route, like Flickr, and haven’t introduced any functionality that would identify photos already uploaded to their service. Only Jeffrey Friedl’s plugins offer this feature.

This leaves me terribly disappointed. As a SmugMug Pro, I don’t want to bother with error messages. I don’t want to bother with posts like this. I’d rather post photographs and update my SmugMug galleries in peace, but I can’t.

If you’re having the same problems with SmugMug, please, write to them, and ask them when they’re going to get their act together. This problem’s existed for several months. How much more time will it take until they deal with it?

Site migration complete

Last night, I completed what could be called an unusual site migration. I went from a self-hosted WP install to WP.com. That’s right, my full site is now hosted at my WP.com account. People usually migrate from WP.com to WP self-installs after their site gets big and they decide they want more options, like the ability to run all sorts of ads and fiddle with the code, etc. With me, it was the opposite. I wanted to stop worrying about my web server and focus on publishing my content.

As I mentioned here, things got worse after upgrading to WP 2.9. My server kept going down for no reason, and often, too. It’d go down several times a day. I’d have to keep watching it all the time, and that got old real quick, especially when I traveled and had no internet access. I’d often get home to find out my site was down and had been down for several hours, if not more. Since I hadn’t mucked about with my server to make things worse, and had already fiddled with and optimized my Apache, MySQL and PHP settings enough to last me a lifetime, I decided to let WP.com have a go at hosting my site and worry about keeping it going. Judging by the initial results, it looks like they had a bit of trouble with it too (see this, this, this and this), but at least it’s not my headache anymore.

During the migration process, I learned three things:

  1. I hadn’t been getting full XML transcripts of my site in the past, when I used WP’s WXR Export feature. See this for more, and make sure you’re not in the same boat.
  2. The WordPress Import wizard still needs a TON of work to iron out the bugs. You’ll see why below.
  3. WordPress.com Support can be terribly unresponsive. I waited over 20 days for a resolution to my ticket about the site migration, and in the end, I had to work things out myself. When I told them as much (and I tried to be as nice as possible about it), a small apology would have been nice, but I didn’t even get that.

Granted, my site migration does not represent the usual WP user’s migration path, nor was it a typical migration. By current count, I have 1,552 posts, 4,129 comments and 3,090 media files. That’s quite a bit more than your average blogger, and I think that’s what served to point out the bugs in the Import Wizard.

What exactly were the bugs?

  • Failure to import all posts, comments and media files
  • Post and media file duplication
  • Failure to properly change all paths to media files (either image source or image link or both)

Here’s where I need to acknowledge the help I did receive from WP Support. My WXR file was over 20 MB. The WXR upload limit at WP.com is 15 MB. WP Support modified the upload limit to allow me to go through with the WXR upload, and they also adjusted the timeout limit, because the migrations timed out prematurely as well. So I thank them for that help.

The big problem turned out to be the third issue mentioned above. The Import Wizard didn’t change all the paths to the image files. It turned out to be a very hit-or-miss operation. Given the scale of the operation, I might even call it a disaster. Some posts were fine, some weren’t fine at all, and some were a hodge-podge of images that were okay and images whose paths were wrong, or whose links were wrong, or both. You might imagine that checking and fixing the image paths for over 3,000 media files can turn out to be a very big job, and it was.

I was also under pressure to finish the job quickly, since the site was live. Imagine how you’d feel as a reader if you visited a website and none of the image files showed up — you’d probably think the site was dead or dying, right? Well, I certainly didn’t want people to think my site was on its last legs, so I had to act quickly.

Thankfully, only (sic) about 40% of my posts had their image files messed up. The rest were fine, but then I also had plenty of posts with no images. If all my posts contained images, I might have had 90% of my posts to worry about… Still, I had to check every post, and as you might know if you’re a regular reader, I post lots of images per post, and where a post was messed up, brother, I had to do a bunch of work to get it fixed up. Just as an example, some posts have anywhere from 20-50 images…

Here are a couple of screenshots that show you how things stood. Here, the image link was okay, which meant I didn’t have to modify it. This was a happy scenario. However, the image path was still wrong, as you’ll see below.

The image source, or path, didn’t change during the import process, which meant I had to change it manually, or browse for the image by title or file name in the media library and re-insert it.

The image size was also lost, which meant that if I changed the image path manually, I had to also enter the width of the image.

What made things more cumbersome was the lack of an image insert button in the Gallery dialog box. That’s one of the differences between a WP self-install and WP.com. This meant that even though I’d uploaded a certain image for a certain post, and it showed on the Gallery tab, I couldn’t go there and re-insert it into a post. I had to go to the Media Library tab, search for it, then re-insert it, which takes precious time and clicks, particularly when you’re dealing with thousands of images.

In spite of all the extra work, which took about 1½ weeks of my time, I got it done last night. My site is now fully functional, thank goodness!

As for my experience with WP Support, there are no hard feelings. I like the WordPress platform and it’s done right by me so far. I wasn’t a VIP customer and they didn’t have any financial incentives (besides the small fees for a space upgrade and a domain mapping) to get their hands dirty with my code. They offered minimal support, and to a certain degree, that’s to be expected when most of your customers are non-paying customers, as is the case with the large majority of WP bloggers.

Still, I would encourage them to consider doing the following:

  • Improve their Import Wizard so that it will not terminate until it checks and double-checks that it has imported all the posts, comments, pages, tags, categories and media files, and that all the paths to the media files are correct. They’ve still got one of my WXR files, and they can use it as a case study to help improve the accuracy of the Import Wizard.
  • Include an image insert button on the Gallery tab of the “Add an Image” dialog box, like the one that already exists on WP self-installs.
  • Offer the functionality of the Search & Replace WP plugin for WP.com blogs. This would have been a huge help to me as I fixed the image paths. I could have run a couple of queries on my blog’s content to change most of the image paths, and it would have halved my workload.
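
To illustrate that last point, here’s the sort of query I have in mind. It’s only a sketch: the URL prefixes are made up, and the table and column names are just the stock WordPress ones. You’d obviously want a database backup before running anything like it.

```sql
-- Rough sketch of a search-and-replace query for image paths.
-- The old and new URL prefixes below are hypothetical; wp_posts and
-- post_content are the default WordPress table and column names.
UPDATE wp_posts
SET post_content = REPLACE(
    post_content,
    'http://www.example.com/wp-content/uploads/',
    'https://example.files.wordpress.com/'
);
```

A couple of queries like that, one per old path prefix, would have fixed the bulk of the image sources in one go.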

If you were one of the folks who kept seeing no images during this transition period, sorry for the inconvenience, and I’m glad you’re still around. If you’re still seeing no images, definitely get in touch with me, I might have missed a few — after all, I’m only human.

Are you really backing up your WP blog?

When those of us with self-hosted WordPress blogs back up our content using the built-in WXR functionality, do we ever check the downloaded XML file? Until recently, I didn’t worry about it. I’d click on the Export button, copy the WXR file to a backup folder and think my blog was safe, but I was wrong.

You see, what may be happening is that the creation of the WXR file on the server side gets terminated before all the content is written to it, leaving us with a partial backup of our blogs. This is no fault of the WordPress platform; it happens when the server settings don’t allow enough resources for the PHP script that writes out the XML file. When that’s the case, this is what the end of the WXR XML file looks like.

In the screenshot you see above, the script ran out of memory (I’d set PHP’s memory_limit at 64 MB, which was too little for my needs), but it can also run out of time, if PHP’s max_execution_time is set too low.

Depending on your scenario, you may or may not have access to the original php.ini file on your web server. If not, check with your host; you may be able to create a php.ini file at the root of your hosting account to adjust these parameters (within limits). The thing to do is set memory_limit and max_execution_time high enough to give PHP the resources it needs to generate the full WXR file. I can’t prescribe specific values here, because the amount of memory and time the script needs depends on how big your blog is. All I can suggest is that you experiment with the settings until they’re high enough for the WXR file to generate fully. You don’t want to set them too high, either, because then a runaway script can exhaust your server’s memory, and that’s not fun. This is what my setup looks like.
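
For reference, here’s roughly what those two directives look like in a php.ini override. The values are just a hypothetical starting point, not a recommendation and not necessarily what’s in my screenshot; what counts as “enough” depends entirely on the size of your blog.

```ini
; Hypothetical php.ini overrides -- raise them until the WXR file generates fully.
memory_limit = 128M        ; RAM the export script is allowed to use
max_execution_time = 120   ; seconds the script may run before PHP cuts it off
```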

What happens if you use a cheap web host is that you get crammed in with hundreds of other sites on a single virtual server where all the settings are tightly reined in, to make sure no one hogs any resources. Chances are that as your blog grows, your WXR file will get too big and will need more resources to generate than the server allows, which means you’ll start getting truncated backup files. If you never check them by opening up the XML and scrolling to the end to rule out any error messages, you’re not really backing up your blog.

Keep this in mind if you want to play it safe. Always check the WXR file. A good backup should close all the tags at the end, particularly the final </rss> tag, like this screenshot shows.
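
If you’d rather not eyeball the XML by hand every time, a tiny script can do the check for you. This is just a sketch with a made-up file name; it relies on the fact that a complete WXR export is an RSS document, so its very last closing tag should be </rss>.

```php
<?php
// Quick sanity check for a WXR backup (hypothetical file name).
// A complete export ends with the closing </channel> and </rss> tags,
// not with a PHP error message about memory or execution time.
$file = 'my-blog-export.xml';
$tail = substr(file_get_contents($file), -300);

if (strpos($tail, '</rss>') !== false) {
    echo "Backup looks complete: closing </rss> tag found.\n";
} else {
    echo "Backup may be truncated: no closing </rss> tag near the end.\n";
}
```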

Site optimization — the order of your scripts and styles

I watched this video yesterday, where a Googler talks about the importance of ordering your scripts and styles correctly in order to speed up the rendering of your website, made a quick change to my header file, then ran the Page Speed extension for Firefox to see how I was doing. While there are still some things to address that could make my site load faster, some of which don’t depend on me but on external JavaScript files from ads and stats and such, I’m glad to see things are a little snappier today.

Google Webmaster Central — Optimizing the order of scripts and styles

There’s extra documentation on this very topic available from Google, in the help files for its Page Speed extension. It’s worth a read, because a quick re-ordering of the code in your site’s header could shave as much as 50% off your site loading times, depending on how much JavaScript you’re using.
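
To make the idea concrete, here’s a rough sketch of what “styles before scripts” looks like in a WordPress theme’s header.php. The file names are made up and your theme’s markup will differ; the point is simply that the external stylesheet comes before the external script, so the CSS download isn’t stuck waiting behind the JavaScript.

```php
<!-- Hypothetical excerpt of a theme's header.php -->
<head>
<title><?php wp_title(); ?></title>

<!-- Stylesheets first -->
<link rel="stylesheet" type="text/css" href="<?php bloginfo('stylesheet_url'); ?>" />

<!-- Scripts after the styles (or, better yet, just before the closing body tag) -->
<script type="text/javascript" src="<?php bloginfo('template_url'); ?>/js/site.js"></script>

<?php wp_head(); ?>
</head>
```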

Changed all site URLs

I pushed through some major changes to site URLs tonight. Every single site URL has now changed as a result, but the change is good, and all old URLs should still work just fine, seamlessly redirecting visitors to the new URLs. Just in case though, please let me know if you find a non-working URL.

The changes have to do with how post, category and tag URLs appear. WordPress, my site’s platform, allows me to change URL rewrite rules (the way a certain URL is generated when you visit a page on my site). I’ve wanted to make this change for a long time, and finally bit the bullet after first trying it out on one of my other sites, Dignoscentia.

Here’s what this means for you: shorter, cleaner URLs, with every old URL redirecting seamlessly to its new counterpart.

These changes may not be important to some, but they are to me. Once I get something like this in my head, something that I think will help me organize my content a little better and make the URLs a little shorter and easier to type, I have to go through with it.
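
For anyone who hasn’t fiddled with this before, the change amounts to editing the permalink structure, the template WordPress uses to build post URLs from tags like %year% and %postname%. The structures below are only a generic illustration of the kind of shortening I mean, not my actual before-and-after:

```
Before: /archives/%year%/%monthnum%/%day%/%postname%/
After:  /%postname%/
```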

I have some more changes planned for the actual categories themselves, such as re-organizing my content into more logical categories. I also need to finish tagging all my posts (currently 1242 posts and counting).

This is all part of my long-term efforts to properly curate my content. You may want to have a look at the site news tag to see what other changes I’ve made to the site over time. It’s been an interesting journey with quite a bit of work behind the scenes, but I like doing this sort of stuff a lot.

Google Health is a good thing

When it launched a few weeks ago, Google Health received fairly lackluster reviews. Privacy issues and lack of features were the main complaints. Well, I’m here to tell you those initial views are wrong.

Even if you’re a long-time reader of my site, you may not know what qualifies me to make that statement, so let me tell you a bit about myself.

My background

A few years ago, I was Director of Health Information Systems at a South Florida hospital, where I implemented an electronic medical records system. My job was fairly unique, because I not only wrote the policies and procedures for the system and oversaw its implementation, but I also rolled up my sleeves and built the various screens and forms that made it up. I, along with my staff, also built and maintained the servers and databases that housed it.

As far as my education is concerned, I hold a Master’s Degree in Health Services Administration (basically, hospital administration). I was also admitted to two medical schools. I ended up attending one for almost a year until I realized being a doctor wasn’t for me, and withdrew.

For plenty of years, I’ve been a patient of various doctors and hospitals, as have most, if not all of you, for one reason or another.

Furthermore, my father is a doctor: a psychiatrist. He has a private practice, and also holds a staff job at a hospital. My mother handles his records and files his claims with the insurance companies, using an electronic medical records system. I get to hear plenty of stories about insurance companies, billing ordeals, hospitals and the like.

So you see, I’ve seen what’s involved with medical records and access to said records from pretty much all sides of the equation. Again, I say to you, Google Health is a good thing, and I hope you now find me qualified to make that statement.

The benefit of aggregation

Just why is it such a good thing? Because I wish I could show you your medical records — or rather, their various pieces — but I can’t. That’s because they exist in fragments, on paper and inside computer hard drives, spread around in locked medical records facilities or in your doctors’ offices, all over the place. If you endeavored to assemble your complete medical history, from birth until the present time, I dare say you’d have a very difficult time getting together all of the pieces of paper that make it up — and it might not even be possible. That’s not to mention the cost involved in putting it together.

A few of the problems with healthcare data sharing

Do you know what my doctor’s office charges me per page? 65 cents, plus a 15-cent service fee. For a 32-year-old male (that’s me), it would take a lot of pages (provided I could get a hold of all of them) and a lot of money to put my medical record together.

The sad part is that this is MY medical information we’re talking about. It’s information that health services workers obtained from MY body. It’s MY life and MY record, yet I can’t have access to it unless I fill in a special form at every doctor’s office I’ve ever visited, and pay for the privilege. Is that fair? NO. Can something be done about it? YES, and so far, Google Health is the only service I’ve seen that is trying to pull together all of the various pieces that make up my medical record, for my benefit and no one else’s. Sure, the system is in its infancy, and there’s a lot of work to be done to get it up to speed, but that’s not Google’s fault.

I’ve been inside the healthcare system, remember? I know how things work. I know how slowly they work, to put it mildly. I know how much resistance to change is inherent in the system. Just to get medical staff to use an electronic medical records system is still a huge deal. The idea of giving the patient access to the records, even if it involves no effort on the part of the medical staff (but it does, as you’ll see shortly) is yet another big leap.

Let’s also not forget to consider that medical records systems are monsters. Each is built in its own way. There are certain lax standards in place. Certain pieces of information need to be collected on specific forms. The documentation needs to meet certain coding standards as well, or the hospitals or doctors’ offices or pharmacies won’t get reimbursed. There are also certain standards for data sharing between systems, and the newer systems are designed a little better than older ones.

Yet the innards of most medical records systems are ugly, nasty places. If you took the time to look at the tables and field names and views and other such “glamorous” bits inside the databases that store the data, you’d not only find huge variations, but you’d also find that some systems still use archaic, legacy databases that need special software called middleware just so you can take a peek inside them, or form basic data links between them and newer systems. It’s a bewildering patchwork of data, and somehow it all needs to work together to achieve this goal of data sharing.

The government is sort of, kind of, pushing for data sharing. There’s the NHIN, and there are the RHIOs. There are people out there who want to see this happen and are working toward it. Unfortunately, they’re bumping up against financial and other barriers every day. Not only are they poorly funded, but most healthcare organizations are either unwilling or unable to assign more money to getting good record systems or to improving their existing ones to allow data sharing.

Add to this gloriously optimistic mix the lack of educated data management decisions made in various places — you know the kind of decisions that bring in crappy systems that cost lots of money, so now people have to use them just because they were bought — and you have a true mess.

Oh, let’s also not forget HIPAA, the acronym that no one can properly spell out: Health Insurance Portability and Accountability Act. The significant words here are Insurance and Accountability. That’s government-speak for “CYA, health organizations, or else!” There’s not much Portability involved with HIPAA. In most places, HIPAA compliance is reduced to signing a small sticker assigned to a medical records folder, then promptly forgetting that you did so. Your records will still be unavailable to you unless you pay to get them. Portability my foot…

Benefits trump privacy concerns

Alright, so if you haven’t fallen asleep by now, I think you’ve gotten a good overview of what’s out there, and of what’s involved when you want to put together a system like Google Health, whose aim is to pull together all the disparate bits of information that you want to pull together about yourself. Personally, I do not have privacy concerns when it comes to Google Health. There are more interesting things you could find about me by rummaging through my email archives than you could if you went through my health records. If I’m going to trust them with my email, then I have no problems trusting them with my health information, especially if they’re going to help me keep it all together.

I’m not sure if you’ve used Google Analytics (it’s a stats tool for websites). Not only is it incredibly detailed, but it’s also free, and it makes it remarkably easy to share that information with others, should you want to. You simply type someone’s email address in there and grant them reader or admin privileges to your stats account. Instantly, they can examine your stats. Should you prefer not to do that, you can quickly export your stats data in PDF or spreadsheet format, so you can attach it to an email or print it out, and share the information that way.

I envision Google Health working the same way. Once you’ve got your information together, you can quickly grant a new doctor access to your record, so they can look at all your medical history or lab results. You’ll be able to easily print out immunization records for your children, or just email them to their school so they can enroll in classes. A system like this is priceless in my opinion, because it’ll make it easy to keep track of one’s health information. Remember, it’s YOUR information, and it should NOT stay locked away in some hospital’s records room somewhere. You should have ready access to it at any time.

Notice I said “whose aim is to pull together all the disparate bits of information you WANT to pull together” a couple of paragraphs above. That’s because you can readily delete from Google Health any conditions, medications or procedures you’d rather keep completely private. Should you import something you don’t feel safe storing online, just delete that specific item and keep only the information you’d be comfortable sharing with others. It’s easy; try it and see.

Lots of work has already been done

Another concern voiced by others is that there isn’t much to do with Google Health at the moment; there isn’t much functionality, they say. I disagree with this as well. Knowing how hard it is to get health systems talking to each other, and knowing how hard it is to forge the partnerships that allow data sharing to occur, I appreciate the significant efforts that went on behind the scenes at Google Health to bring about the ability to import medical data from the current 8 systems (Beth Israel Deaconess, Cleveland Clinic, Longs, Medco, CVS MinuteClinic, Quest, RxAmerica and Walgreens).

What’s important to consider is that Google needed to have the infrastructure in place (servers, databases) ready to receive all of the data from these systems. That means Google Health is ready to grow as more partnerships are forged with more health systems.

In order to illustrate how hard it is to get other companies to share data with Google Health, and why it’s important to get their staff on board with this new development in medical records maintenance, I want to tell you about my experience linking Quest Diagnostics with Google Health.

Quest is one of the companies listed at Google Health as having the ability to export/share their data with my Google Health account. What’s needed is a PIN, a last name and a date of birth. The latter two are easy. The PIN is the hard part. While the Quest Diagnostics website has a page dedicated to Google Health, where they describe the various benefits and how to get started, they ask people to contact their doctors in order to obtain a PIN. I tried doing that. My doctor knew nothing about it. Apparently it’s not the same PIN given to me when I had my blood drawn; by the way, that one didn’t work on Quest’s own phone system either, when I wanted to check my lab results that way…

Quest Diagnostics lists various phone numbers on their site, including a number for the local office where I went to get my bloodwork done, but all of the phone numbers lead to automated phone systems that have no human contact whatsoever. So Quest makes it nearly impossible to get in touch with a human employee and get the PIN. Several days later, in spite of the fact that I’ve written to them using a web form they provided, I still don’t have my PIN and can’t import my Quest Diagnostics lab results into my Google Health account.

Updated 5/27/08: Make sure to read Jack’s comment below, where he explains why things have to work this way with Quest — for now at least.

That is just one example of how maddening it is to try and interact with healthcare organizations, so let me tell you, it’s a real feat that Google managed to get eight of them to sign up for data sharing with Google Health. It’s also a real computer engineering feat to write the code needed to interact with all those various systems. I’m sure Google is working on more data sharing alliances as I write this, so Google Health will soon prove itself even more useful.

More work lies ahead

I do hope that Google is in it for the long run though, because they’ll need to lead data sharing advocacy efforts for the next decade or so in order to truly get the word out to patients, healthcare organizations and providers about the benefits of data sharing and Google Health.

For now, Google Health is a great starting point, with the infrastructure already in place and ready to receive more data. I’m sure that as the system grows, Google will build more reporting and data export capabilities from Google Health to various formats like PDF, as mentioned several paragraphs above, and then the system will really begin to shine. I can’t stress enough what a good thing this is, because just like with web search, it puts our own medical information at our fingertips, and that’s an invaluable benefit for all.

Join me for a short screencast where I show you Google Health. You can download it below.

Download Google Health Screencast

(6 min 28 sec, 720p HD, MOV, 39.8MB)