Reviews

Internet access in Romania still much better than USA

Remember this article of mine, where I shared my thoughts on why broadband speeds in the USA are so far behind just about every other country? Well, the difference has just gotten even greater.

A little under two months ago, I helped my parents in the US upgrade their broadband from AT&T’s unreliable and slow ADSL to Comcast’s digital cable. They went from 2-3 Mbps down and 512 Kbps up with AT&T to 15-16 Mbps down and 3-4 Mbps up with Comcast. That was a huge improvement, but it’s still nothing compared to what’s available in Romania at the moment.

Here, Birotec (my ISP) has increased their broadband speeds ten-fold this month. That means I just went from 3 Mbps to 30 Mbps. We went by their store today to pay our bill, and while talking with the customer service rep, I found out about the upgrade. She said it quite matter-of-factly, as if it were no big deal. It’s a huge deal to me! With Birotec, I can get speeds of up to 100 Mbps if I want to. And the connection is nearly symmetric upstream and downstream, because it runs on a fiber optic backbone. I went to speedtest.net and tested my speed today, and I’m getting even better download speeds than advertised, which is amazing: up to 54 Mbps downstream!
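
To put that jump in perspective, here’s a rough back-of-the-envelope calculation of how long a big download takes at the old and new speeds. The 700 MB file size is just a hypothetical example, and this ignores protocol overhead:

```python
# Rough illustration of what going from 3 Mbps to 30+ Mbps means in practice.
# The 700 MB file (roughly one CD image) is a hypothetical example.
FILE_SIZE_MB = 700

for mbps in (3, 30, 54):
    seconds = (FILE_SIZE_MB * 8) / mbps  # megabits divided by megabits per second
    print(f"{mbps:>2} Mbps: about {seconds / 60:.1f} minutes")
```

At 3 Mbps that download takes over half an hour; at 30 Mbps it takes about three minutes.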

Do you want to know the best part? It still costs me only €10/month for all this blazing speed, and I get a free telephone plan thrown in as well, with my own number. In the US, it costs my parents over $50/month for internet access with Comcast, and if they wanted a phone plan, the price would go up by another $30-40.

Romtelecom, Romania’s largest telephone and internet provider, has also increased their broadband speeds. They’ve begun using a new DSL technology called VDSL, and they’re offering broadband plans at speeds up to 30 Mbps downstream and 6 Mbps upstream. Incidentally, their largest plan (30 Mbps) also costs about €10/month, but you’ve got to keep in mind it’s still DSL and the uplink speeds are slower. Plus, phone service will cost you extra.

I’d love to know which companies can offer the same speeds in the US, and at similar prices. Short of Verizon’s fiber optic network, which is only deployed in limited metro areas, and still costs more, what else is there?

And that brings me back to my original question: why are broadband speeds so slow in the US?

Thoughts

Republicans move to block net neutrality

The latest push to get the net neutrality bill passed has met with resistance from Republicans and from Comcast, one of the large American ISPs. Apparently they think the market regulates itself. It would, in a perfect world, but not in one where politicians, working hand in hand with ISP lobbyists, block any measure that would encourage real competition or require increases in broadband speeds. That is exactly what US politicians on both sides of the aisle have been doing for the past decade.

Is it any wonder then that broadband internet still sucks in the US? I say 5 Mbps broadband at $20/month or less ought to be legislated as a minimum, and all ISPs ought to be forced to offer it as one of their monthly subscription options. That would teach them a lesson they deserve.

Thoughts

It’s no surprise broadband internet sucks in the US

A recent Akamai survey, which I shared here and here, ranked the US 33rd in the world for broadband connections above 2 Mbps. Sure, it moved up two spots compared to last year, but it’s still lagging behind countries such as Monaco, Slovakia, South Korea, and believe it or not, Romania — which is where I’m living these days.

That’s sad. It’s very sad because a country such as Romania, with fewer resources than the US and a LOT more corruption at every level, has managed to provide better Internet service. It just goes to show how much pork barrel legislation and relentless lobbying can slow down an entire country’s Internet access. Every time a company tried to improve the way broadband worked in the US, it was eventually bought out, or dragged down and kept down for the count.

Remember Telocity? It was one of the first companies to offer DSL service in the US, ahead of Ma Bell. Even though it paid hefty sums for the right to carry Internet traffic on Ma Bell’s lines, it had endless trouble with that same Ma Bell: faults would somehow just happen to crop up on those wires or on the switching equipment, and then Telocity would have to pay more money so Ma Bell could fix its own equipment, which it would claim Telocity had broken, and so on, ad nauseam.

That’s just one example. Another was the more recent push to restructure the way cable services are provided (both TV and internet). One of the efforts was the à-la-carte programming initiative, and another was the push for faster and more reliable cable Internet services. You wouldn’t believe the advertising, PR and lobbying blitz the cable industry started and kept up for several months — actually, I’m fairly sure you saw their ads on TV and on buses everywhere, particularly in the Washington, DC area.

Or what about when they got together in late 2007 and 2008 to ask for an Internet tax? Remember the tiers of traffic they wanted to create? They wanted all the big websites to pay them for the traffic, as if they weren’t already getting enough money from the customers for their slow and unreliable services. They also wanted large chunks of money from the federal government in order to upgrade their infrastructure. No matter how much money they make, they’re so greedy they always want more, more, more.

What I’d like to know is how ISPs in all these other countries, including Romania, manage to offer faster and more reliable Internet service without asking their governments for money, without charging big websites for their traffic, and while charging customers less per month for better broadband. How is that possible? Could it be that these companies actually know how to run their businesses, while their counterparts in the US are filled with lazy, greedy idiots?

I still vividly remember an incident which happened while I was a director of IT at a Florida hospital, several years ago. A BellSouth technician had been called in to check the phone boards, and my network and servers kept going down and coming back up. The Medical Records system kept giving errors when employees wanted to access forms to fill in patient data, not to mention that other network services, like file sharing and printing, kept going on the fritz. I checked every one of the servers and they were fine. I finally walked into the switch room, at my wits’ end, only to find the moronic BellSouth employee with his fat, lazy butt on our UPS, jiggling it back and forth as he chatted with someone back at BellSouth HQ, plugging and unplugging the power supply that fed one of the main network switches. I went ballistic, grabbed him by the collar and threw him out of my switch room. Was he that stupid that he didn’t know where he was sitting? Was he such a pig that he couldn’t feel the plugs underneath him as he sat on them? He didn’t even want to apologize for taking out an entire hospital’s network during daytime hours. That’s BellSouth for you.

I don’t know how the US can get better broadband, unless it’s legislated. An ultimatum must be given by the government, one that can’t be overridden by any lobbyists or CEOs shedding crocodile tears in front of Congress. These companies simply will not get their act together until they, too, are grabbed by their collars and shaken about. They’ve gotten used to the status quo, they like it, and they’re clinging to it with all their might.

Meanwhile, here’s a sample of the Internet plans you can get in Romania right now. For comparison purposes, 1 Euro is worth about $1.40 these days.

Romtelecom (the main phone carrier, provides ADSL services):

  • 2 Mbps, 2048 kbps/512 kbps, 4.88 Euro/month
  • 4 Mbps, 4096 kbps/512 kbps, 7.02 Euro/month
  • 6 Mbps, 6144 kbps/512 kbps, 9.40 Euro/month
  • 8 Mbps, 8192 kbps/768 kbps, 14.16 Euro/month
  • 20 Mbps, 20480 kbps/1024 kbps, 24.87 Euro/month

[source]

Birotec (provides fiber optic services, all plans include phone line with varying amount of minutes based on plan price):

  • 3 Mbps up/down, 10 Euro/month
  • 4 Mbps up/down, 15 Euro/month
  • 6 Mbps up/down, 20 Euro/month
  • 8 Mbps up/down, 29 Euro/month
  • 10 Mbps up/down, 49 Euro/month

[source]

RDS (provides fiber optic, cable, cellular modem and dial-up access — prices not readily available on website):

  • Fiber optic access up to 2.5 Gbps
  • Cable access up to 30 Mbps

[source]
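
To put those euro prices in dollar terms, here’s a quick throwaway script using the approximate $1.40/EUR rate mentioned above. It simply restates the Romtelecom and Birotec tables and works out the cost per Mbps of downstream speed:

```python
# Convert the Romanian plans listed above to US dollars (1 EUR ~ 1.40 USD).
EUR_TO_USD = 1.40

plans = [
    # (provider, downstream Mbps, monthly price in EUR)
    ("Romtelecom", 2, 4.88),
    ("Romtelecom", 4, 7.02),
    ("Romtelecom", 6, 9.40),
    ("Romtelecom", 8, 14.16),
    ("Romtelecom", 20, 24.87),
    ("Birotec", 3, 10.00),
    ("Birotec", 4, 15.00),
    ("Birotec", 6, 20.00),
    ("Birotec", 8, 29.00),
    ("Birotec", 10, 49.00),
]

for provider, mbps, eur in plans:
    usd = eur * EUR_TO_USD
    print(f"{provider:10} {mbps:>2} Mbps  {usd:6.2f} USD/month  "
          f"({usd / mbps:.2f} USD per Mbps)")
```

Even the most expensive plan on that list works out to under $70 a month, and most of them come in well under what my parents pay Comcast.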

The lowest internet access plan in Romania is 2 Mbps. Cellular modems are advertised at speeds of up to 3 Mbps. Meanwhile, in the US, you can still find 512 Kbps plans at two or three times the price of a 2 Mbps plan in Romania. That’s the price of complacency and excessive lobbying.

Reviews

Google Health is a good thing

When it launched a few weeks ago, Google Health received fairly lackluster reviews. Privacy issues and lack of features were the main complaints. Well, I’m here to tell you those initial views are wrong.

Even if you’re a long-time reader of my site, you may not know what qualifies me to make that statement, so let me tell you a bit about myself.

My background

A few years ago, I was Director of Health Information Systems at a South Florida hospital, where I implemented an electronic medical records system. My job was fairly unique, because I not only wrote the policies and procedures for the system and oversaw its implementation, but I also rolled up my sleeves and built the various screens and forms that made it up. I, along with my staff, also built and maintained the servers and databases that housed it.

As far as my education is concerned, I hold a Master’s Degree in Health Services Administration (basically, hospital administration). I was also admitted to two medical schools. I ended up attending one for almost a year until I realized being a doctor wasn’t for me, and withdrew.

For many years now, I’ve been a patient of various doctors and hospitals, as have most, if not all, of you, for one reason or another.

Furthermore, my father is a doctor: a psychiatrist. He has a private practice, and also holds a staff job at a hospital. My mother handles his records and files his claims with the insurance companies, using an electronic medical records system. I get to hear plenty of stories about insurance companies, billing ordeals, hospitals and the like.

So you see, I’ve seen what’s involved with medical records and access to said records from pretty much all sides of the equation. Again, I say to you, Google Health is a good thing, and I hope you now find me qualified to make that statement.

The benefit of aggregation

Just why is it such a good thing? Because I wish I could show you your medical records — or rather, their various pieces — but I can’t. That’s because they exist in fragments, on paper and inside computer hard drives, spread around in locked medical records facilities or in your doctors’ offices, all over the place. If you endeavored to assemble your complete medical history, from birth until the present time, I dare say you’d have a very difficult time getting together all of the pieces of paper that make it up — and it might not even be possible. That’s not to mention the cost involved in putting it together.

A few of the problems with healthcare data sharing

Do you know what my doctor’s office charges me per page? 65 cents, plus a 15 cent service fee. For a 32-year-old male (that’s me), it would take a lot of pages (provided I could get hold of all of them) and a lot of money to put my medical record together.
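
Just to show how quickly those fees add up, here’s a tiny back-of-the-envelope calculation. The 65-cent and 15-cent figures are the ones quoted above; the page count is purely hypothetical, and I’m assuming the service fee applies per page:

```python
# Hypothetical cost of copying a complete paper medical record at my doctor's rates.
PER_PAGE_FEE = 0.65 + 0.15  # copying fee plus service fee, in dollars per page (assumed per page)
pages = 400                 # hypothetical page count for one person's full history

print(f"{pages} pages x ${PER_PAGE_FEE:.2f}/page = ${pages * PER_PAGE_FEE:.2f}")
```

Even at a modest few hundred pages, you’re looking at a few hundred dollars just to get copies of your own record.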

The sad part is that this is MY medical information we’re talking about. It’s information that health services workers obtained from MY body. It’s MY life and MY record, yet I can’t have access to it unless I fill in a special form at every doctor’s office I’ve ever visited, and pay for the privilege. Is that fair? NO. Can something be done about it? YES, and so far, Google Health is the only service I’ve seen that is trying to pull together all of the various pieces that make up my medical record, for my benefit and no one else’s. Sure, the system is in its infancy, and there’s a lot of work to be done to get it up to speed, but that’s not Google’s fault.

I’ve been inside the healthcare system, remember? I know how things work. I know how slowly they work, to put it mildly. I know how much resistance to change is inherent in the system. Just to get medical staff to use an electronic medical records system is still a huge deal. The idea of giving the patient access to the records, even if it involves no effort on the part of the medical staff (but it does, as you’ll see shortly) is yet another big leap.

Let’s also not forget to consider that medical records systems are monsters. Each is built in its own way. There are certain lax standards in place. Certain pieces of information need to be collected on specific forms. The documentation needs to meet certain coding standards as well, or the hospitals or doctors’ offices or pharmacies won’t get reimbursed. There are also certain standards for data sharing between systems, and the newer systems are designed a little better than older ones.

Yet the innards of most medical health systems are ugly, nasty places. If you took the time to look at the tables and field names and views and other such “glamorous” bits inside the databases that store the data, you’d not only find huge variations, but you’d also find that some systems still use archaic, legacy databases that need special software called middleware just so you can take a peek inside them, or form basic data links between them and newer systems. It’s a bewildering patchwork of data, and somehow it all needs to work together to achieve this goal of data sharing.

The government is sort of, kind of, pushing for data sharing. There’s the NHIN (Nationwide Health Information Network) and there are the RHIOs (Regional Health Information Organizations). There are people out there who want to see this happen and are working toward it. Unfortunately, they’re bumping up against financial and other barriers every day. Not only are they poorly funded, but most healthcare organizations either will not or cannot put more money toward getting good record systems, or toward improving their existing ones to allow data sharing.

Add to this gloriously optimistic mix the lack of educated data management decisions made in various places — you know the kind of decisions that bring in crappy systems that cost lots of money, so now people have to use them just because they were bought — and you have a true mess.

Oh, let’s also not forget HIPAA, the acronym that no one can properly spell out: Health Insurance Portability and Accountability Act. The significant words here are Insurance and Accountability. That’s government-speak for “CYA, health organizations, or else!” There’s not much Portability involved with HIPAA. In most places, HIPAA compliance is reduced to signing a small sticker assigned to a medical records folder, then promptly forgetting that you did so. Your records will still be unavailable to you unless you pay to get them. Portability my foot…

Benefits trump privacy concerns

Alright, so if you haven’t fallen asleep by now, I think you’ve gotten a good overview of what’s out there, and of what’s involved when you want to put together a system like Google Health, whose aim is to pull together all the disparate bits of information that you want to pull together about yourself. Personally, I do not have privacy concerns when it comes to Google Health. There are more interesting things you could find about me by rummaging through my email archives than you could if you went through my health records. If I’m going to trust them with my email, then I have no problems trusting them with my health information, especially if they’re going to help me keep it all together.

I’m not sure if you’ve used Google Analytics (it’s a stats tool for websites). Not only is it incredibly detailed, but it’s also free, and it makes it very easy to share that information with others, should you want to. You simply type in someone’s email address and grant them reader or admin privileges to your stats account. Instantly, they can examine your stats. Should you prefer not to do that, you can quickly export your stats data in PDF or spreadsheet format, so you can attach it to an email or print it out, and share the information that way.

I envision Google Health working the same way. Once you’ve got your information together, you can quickly grant a new doctor access to your record, so they can look at all your medical history or lab results. You’ll be able to easily print out immunization records for your children, or just email them to their school so they can enroll in classes. A system like this is priceless in my opinion, because it’ll make it easy to keep track of one’s health information. Remember, it’s YOUR information, and it should NOT stay locked away in some hospital’s records room somewhere. You should have ready access to it at any time.

Notice I said “whose aim is to pull together all the disparate bits of information you WANT to pull together” a couple of paragraphs above. That’s because you can readily delete from Google Health any conditions, medications or procedures you’d rather keep completely private. Should you import something you don’t feel safe storing online, just delete that specific item, and keep only the information you’d be comfortable sharing with others. It’s easy; try it and see.

Lots of work has already been done

Another concern voiced by others is that there isn’t much to do with Google Health at the moment — there isn’t much functionality, they say. I disagree with this as well. Knowing how hard it is to get health systems talking to each other, and knowing how hard it is to forge the partnerships that allow data sharing to occur, I appreciate the significant effort that went on behind the scenes at Google Health to bring about the ability to import medical data from the current eight partners (Beth Israel Deaconess, Cleveland Clinic, Longs, Medco, CVS MinuteClinic, Quest, RxAmerica and Walgreens).

What’s important to consider is that Google needed to have the infrastructure in place (servers, databases) ready to receive all of the data from these systems. That means Google Health is ready to grow as more partnerships are forged with more health systems.

In order to illustrate how hard it is to get other companies to share data with Google Health, and why it’s important to get their staff on board with this new development in medical records maintenance, I want to tell you about my experience linking Quest Diagnostics with Google Health.

Quest is one of the companies listed at Google Health as having the ability to export/share its data with my Google Health account. What’s needed is a PIN, a last name and a date of birth. The latter two are easy. The PIN is the hard part. While the Quest Diagnostics website has a page dedicated to Google Health, where they describe the various benefits and how to get started, they ask people to contact their doctors in order to obtain a PIN. I tried doing that. My doctor knew nothing about it. Apparently it’s not the same PIN given to me when I had my blood drawn — by the way, that one didn’t work on Quest’s own phone system when I wanted to check my lab results that way…

Quest Diagnostics lists various phone numbers on their site, including a number for the local office where I went to get my bloodwork done, but all of the phone numbers lead to automated phone systems with no human contact whatsoever. So Quest makes it nearly impossible to reach a human employee and get the PIN. Several days later, even though I wrote to them using a web form they provided, I still don’t have my PIN and can’t import my Quest Diagnostics lab results into my Google Health account.

Updated 5/27/08: Make sure to read Jack’s comment below, where he explains why things have to work this way with Quest — for now at least.

That is just one example of how maddening it is to try and interact with healthcare organizations, so let me tell you, it’s a real feat that Google managed to get eight of them to sign up for data sharing with Google Health. It’s also a real computer engineering feat to write the code needed to interact with all those various systems. I’m sure Google is working on more data sharing alliances as I write this, so Google Health will soon prove itself even more useful.

More work lies ahead

I do hope that Google is in it for the long run though, because they’ll need to lead data sharing advocacy efforts for the next decade or so in order to truly get the word out to patients, healthcare organizations and providers about the benefits of data sharing and Google Health.

For now, Google Health is a great starting point, with the infrastructure already in place and ready to receive more data. I’m sure that as the system grows, Google will build more reporting and data export capabilities from Google Health to various formats like PDF, as mentioned several paragraphs above, and then the system will really begin to shine. I can’t stress enough what a good thing this is, because just like with web search, it puts our own medical information at our fingertips, and that’s an invaluable benefit for all.

Join me for a short screencast where I show you Google Health. You can download it below.

Download Google Health Screencast

(6 min 28 sec, 720p HD, MOV, 39.8MB)

Lists

Condensed knowledge for 2008-03-24

Reviews

Windows Family Safety

Windows Family Safety (WFS) is a new offering from Microsoft that aims to protect families and individuals from questionable or indecent websites. I tried it out for a couple of weeks, and found it to work fairly well, except for a few hiccups here and there.

It is a software-based internet filtering mechanism. The difference between a software-based internet filter and a hardware-based one is that the software needs to be installed on every computer where filtering is desired. A hardware-based internet filter is usually self-contained in a box or appliance that gets placed between the user’s internet connection and the internet. The benefit of such an appliance is readily seen. There’s nothing to install on client computers. Unfortunately, hardware-based solutions have been fairly expensive, historically speaking.

Software-based internet filtering has also cost money, until now. As a matter of fact, Microsoft used to offer one such software-based solution with its premium MSN service. Windows Family Safety may be that same offering, repackaged as a free service.

Having used other software-based internet filters, I can tell you Windows Family Safety is a lot easier to use, and much less annoying, than the paid products. Those other services, which don’t even deserve to be named here, were just plain awful. I had to authenticate every time I tried to access a website, and logins sometimes didn’t even take. What’s worse, if a single website called out to other websites to display information, as is so common these days, I had to authenticate for every single request. They were a nightmare, and I quickly uninstalled them.

Windows Family Safety requires a simple install, and the selection of a master account which can set the level of access for that computer. It uses Microsoft Passport sign-ons, which means I was able to use my Hotmail account to log in. After that, it was a matter of logging in every time I turned on my computer or came back from standby. This was one area where I encountered a hiccup though. The software had an option to allow me to save my username and password, so I wouldn’t have to enter them so often, but that option didn’t seem to work. I was stuck logging in much more than I cared to do, but still, this was nothing compared to the torture I went through with other software-based filters — as already mentioned in the paragraph above.

Just how does WFS work? It turns out that it uses a proxy to filter the traffic. That means every time you request a website, the request first goes through the WFS servers, where it is checked against their content database to determine whether the site is appropriate for the safety level you’ve chosen. Here’s where I encountered two hiccups.

The first was that at peak times, my internet connection slowed to a crawl while traffic waited to pass through the fairly busy proxy servers and be filtered. That was really annoying, but I assume it’s going to get better as MS dedicates more proxy servers to the service. Perhaps it would be better to download content filters directly to each computer and filter the traffic locally, so the chance of a bottleneck is reduced or eliminated.

The second was the seemingly arbitrary designation of some sites as inappropriate. I chose to filter out adult, gambling and violent websites. Somehow, both of my blogs (ComeAcross and Dignoscentia) got flagged under that filter, which was very surprising to me. Neither of those sites can even remotely be classified under those questionable categories. Fortunately, there’s a fairly simple process for requesting that a site be reconsidered for proper classification, and it’s built into the Windows Family Safety website. I followed the procedure, and within days, my sites were properly classified. But the fact that I had to go through all of that makes me wonder how sites are classified in the first place.
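
For what it’s worth, here’s a minimal conceptual sketch of what this kind of proxy-side category check amounts to. This is in no way Microsoft’s actual implementation; the sites and categories below are invented purely for illustration:

```python
# Conceptual sketch of proxy-side filtering: before the proxy fetches a page
# on your behalf, it looks up the requested host in a category database and
# compares that category against the safety level the user chose.

BLOCKED_CATEGORIES = {"adult", "gambling", "violence"}  # the categories I chose to filter

SITE_CATEGORIES = {
    "example-casino.test": "gambling",
    "example-news.test": "news",
    # ... the real service presumably keeps a huge, centrally maintained database
}

def is_allowed(host: str) -> bool:
    """Return True if the proxy should fetch the page, False if it should block it."""
    category = SITE_CATEGORIES.get(host, "unknown")
    # Unclassified sites could default to allowed or blocked, depending on strictness.
    return category not in BLOCKED_CATEGORIES

for host in ("example-casino.test", "example-news.test", "comeacross.test"):
    print(host, "->", "allowed" if is_allowed(host) else "blocked")
```

When a legitimate site ends up in the wrong bucket of that database, you get exactly the kind of misclassification I ran into with my blogs.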

Overall, I found that WFS still hasn’t gotten proper branding. What I mean by that is that it’s not clearly identified as a product by Microsoft. The Windows Live OneCare Family Safety website is part of the Live Family of sites, true, but it’s not even identified on most of the other sites in that family (Hotmail, SkyDrive, etc.) I also found that configuring one’s WFS account can be pretty unintuitive, as the navigation on the WFS site is cumbersome and lacking focus (much like the Windows Live OneCare site, come to think of it.) I even got code errors when I tried to surf through it recently, which is not what I expected from a public MS site.

On a general note, Microsoft really needs to do some work in associating each MS product with the Windows Live account that uses it, and making it easy for each user to access the online/offline settings for each product. Google does a great job with this, and MS could stand to learn from them here.

Windows Family Safety is a good solution, and it works well considering that it’s free. If you’re looking to set up some easy internet filtering at your home, it could turn out to work great for you. Give it a try and see!

Thoughts

Photography, take two, part two

I continued to work on replacing photos hosted with third party services. The list of modified posts is provided below. This has proven to be a huge effort. Not only did I have to locate the photos in my digital library — not all of which is keyworded yet, though I’ve got location information for all my photos — but I also chose to re-process, keyword and re-title them. You see, most of these photos were keyworded through bulk uploaders, for the purpose of displaying that data on third party photo sharing sites, not for my own library. Clearly that effort was wasted, but I didn’t know that back when I did it… Where applicable, I am also re-writing some of the text.

I want to make sure that the content I provide here at ComeAcross is truly top tier, as much as possible. What does that mean? Well, it means I spent my entire weekend, including Monday, working on the posts listed below, and on the posts listed in part one. I still have more posts to go. I don’t mind doing this — actually, I look forward to it — but I do hope that you, the reader, appreciate the effort that goes on behind the scenes. 🙂

Also see Photography, take two, part one.

Thoughts

Photography, take two

Over this weekend and the last several days, I’ve gone through posts that contain photographs, and replaced all of the images with ones hosted directly at ComeAcross. In the past, I’ve used photos hosted with third party photo sharing services, and I realize now that was folly.

If a third party service goes down, which is very likely with beta services, my photos become unavailable. Even if that service is not in beta, a simple action like closing one’s account shuts down access to all of the photos uploaded there. It’s much more practical to host the photos together with my website. That way, I am fully responsible for making sure that all of my content is accessible. If something goes down, I can take care of it. If I need to change web hosting providers, I simply transfer all of my files over to another server.

It’s not as simple to transfer one’s content with photo sharing services, no matter what they may promise. Image and metadata portability is still not 100% there, and it doesn’t help when a photo sharing service advertises its API’s availability for more than a year, yet fails to put it out for public use. It also doesn’t help when said portability is rendered useless by the amount of compression applied to the uploaded originals, or the deletion of metadata embedded in the originals…

You see, everyone is ready to promise the world to you when they want to sell you on something. Quite often, that “world” is nothing more than an empty little shell. I speak in general terms here, from the things I’ve learned through my various experiences — mostly recent ones…

At any rate, I’ve still got to modify a number of posts, but I thought I’d point out the ones I’ve already worked on. They’re quite a few, and I’m happy with the results so far. Here they are:

Also see Photography, take two, part two for more updated posts.

Reviews

Hardware review: WD My Book World Edition II

After looking around for a storage solution to house my growing collection of photographs, I found the Western Digital My Book World Edition II. I’ve been storing my photos on single external hard drives so far, but data loss has always been a concern with that approach. All it takes is a hard drive failure, and I’m going to lose a good portion of my hard work. Naturally, I’ve been looking into various RAID or other failsafe solutions, since they’ve gotten to be fairly affordable.

Great design

I was immediately drawn to the new WD My Book line because of their beautiful design, 1 TB capacity, and the ability to configure the device in RAID 1 format, which would mean my data would be mirrored across the two hard drives inside it. (This would also halve the amount of space available, but that was okay with me — I wanted data redundancy.)

WD My Book World Edition II (front)

For those of you not familiar with WD’s external drives, they have done a beautiful job with their enclosure design, and I raved about their Passport line several months ago. It turns out I now own one of them, a sleek black 160 GB 2.5″ drive just like the one pictured in that post. It’s perfect for data portability, and for a while, I even stored some of my photos on it. But it is just a single drive, and as I said, I’m worried about data loss.

Choosing the product

Back to the My Book line. There were two models I really liked: the My Book Pro and the My Book World. Because I have a mixed OS environment (both PC and Mac), I thought a NAS solution like the My Book World would work best for me, even though its specs said it would only work for Windows. I had a pretty good hunch that I would also be able to access it with my iMac. It runs on Java, it has Samba shares, and those are readily accessible from any Mac. But, this isn’t advertised, and that’s a pity.

By the way, if you’re thinking about getting the My Book Pro drive, make sure to read my review of that model. The takeaway message is to stay away from it, and I explained why in that article.

How it works

The drive itself is beautiful and fairly quiet, except when it boots up. WD has also made firmware upgrades available that make the drives even quieter, so that’s a good thing. I can tell you this right away. If you only plan to use the drive in a Windows environment, it’ll work great. Feel free to buy it, you’ll be happy. But, if you plan to use it in a mixed OS environment, and are looking to access it in more flexible ways, such as with custom drive mappings, and not through the software provided with the drive, you might be very frustrated.

Let me explain. The drive comes with a custom version of something called Mionet. I’ve never heard of it, but it’s software that installs on your machine and makes your files and computer remotely accessible from anywhere. When you run the installer, it’ll prompt you to create an account on the Mionet website, and it’ll register the WD drive, along with your computer, as devices that you can then access remotely. (There’s a monthly fee involved if you want to control your own PC remotely with the software, but you don’t need to pay it to use the WD drive fully.)

Once you install the software, you start up Mionet, and the WD My Book World drive gets mapped automatically to your machine. You also have the option to manage the drive through a browser interface. That’s actually where you configure its volumes (1 TB single volume, or RAID 1, still single volume, but mirrored data and only 500 GB) and other options. Basically, you have to remember that the only proper way to access the drive, whether you’re at home or you’re away, is to start up Mionet and get it mapped to your “My Computer”. If you do that, you’re good to go.

WD My Book World Edition II (back)

Potential problems

The problem with this approach (and this tends to be a problem only for geeks like me) is that the drive is readily accessible over the network without Mionet. I can simply browse my workgroup and find it, then log in with separate accounts I can set up by using the WD drive manager, which is accessible through my browser. So here’s where the frustrating part comes in. I can browse to my drive over the network, without Mionet, from any PC or Mac in my home, administer its options, add users and shares, etc. Then I can use Tools >> Map Network Drive on my PC or Command + K on my Mac to connect to the share name, and log in using those user accounts I’ve just set up. But I can only read from those shares. I can’t write to them. The drive’s operating system assigns weird UNIX privileges to those shares, and they don’t correspond to the accounts I’ve just set up. It makes no sense to me, and you’ll only fully understand what I mean if you try this yourself. Suffice it to say that it’s really frustrating, and it’s not what I expected.

It would have been alright if Mionet made a version of their software for the Mac, but they don’t, and they don’t seem to have any plans to. It would still have been alright if the drive hadn’t been accessible from a Mac at all. But the fact that it is accessible, that I can log onto it with usernames and passwords set up through the admin interface, and yet I can only get read-only access to those shares even though I’m supposed to have full access, really gets me. Sometimes it’s a real pain to be a geek…

So, my verdict is that I really like the design and the RAID 1 capability, but I do not like the implementation. I ended up returning this and getting the My Book Pro Edition, which I love, and will review very soon. But remember, if you don’t have a mixed OS environment, and have no problems with starting up Mionet when you want the drive to appear in “My Computer”, My Book World will work great for you, and the remote access capability is a really nice feature.

Updates

Updated 7/19/07: I purchased and reviewed the My Book Pro as well. You can read my review right here.

Updated 8/3/07: Multiple commenters have pointed out (see this, this, this, this, this and this) that you can use the drive just fine with both Macs and PCs, over the network, if you skip the install of the Mionet software altogether. It looks like the clincher is the Mionet install itself. Just forgo it, and you’ll be able to map the drive to both PCs and Macs, and read/write as much as you want. I didn’t realize that I had to uninstall Mionet entirely in order for the read/write to work properly.

But keep in mind, if you don’t use the Mionet software, you won’t be able to access the drive remotely. Well, you might be able to arrange some access, but you’ll need to custom-configure your firewall settings to allow traffic on certain ports, and you’ll need a static external IP or dynamic DNS so you can get at your firewall from the outside. And then you’ll need to worry about data encryption as well, unless you don’t care that your data will travel unencrypted over open networks. If you’re a hardcore geek, feel free to try this last bit out, but if you aren’t, beware, it’s a weekend project, and I can’t help you.

Updated 8/9/07: I’ve had several people comment on how they bought the drive based on this post and the comments made on it by others, believing they could get it working over the network with their Mac. The kicker is that they thought they could connect it directly to their machine and get it working that way. 😐 I don’t know how they got that idea, but let me set the record straight. This is a NETWORK drive. It needs a network in order to work. There’s a chance you might get it working by connecting it directly to your machine, but you’ll probably need a crossover Ethernet cable to do it.

The best way to get it working is to use a hub or a switch, or best of all, your home router, which can assign IP addresses. The drive ships configured for DHCP. That means it has no IP address to start with, and it’s looking for a place to get one. If you don’t have such a place, you’re going to have a lot of headaches. Get such a place (a router) or go buy a USB/Firewire drive. Most people who’ve commented already made it plainly clear that’s what they needed, but they still insisted on using this drive. I don’t know why they enjoy the stress of doing that. I didn’t. As I already said in my post, I returned it and got a WD My Book Pro Edition II.

Last but not least, please do me a big favor. Read through the existing comments before you write one. There are so many already, and there’s a very good chance someone’s already asked your question, and I or someone else has already answered it. Thanks!

Updated 12/11/07: I found out today that Western Digital is going to disallow the sharing of all media files through the Mionet software. In other words, if you’re going to use Mionet to share the files on your drive and make them accessible remotely, you will not be able to see or use any of your media files. I think this is a pretty stupid move on WD’s part, and it’s going to come back to bite them. Until they decide to do away with this boneheaded downgrade, keep it in mind if you’re looking to purchase a My Book World Edition. Do NOT use Mionet. Install the drive without it, and if you’ve got to make the files accessible remotely, find other ways to do it, like through a custom config of your firewall.

Updated 12/18/07: Christian, one of the commenters, has left two very useful comments that are worth mentioning here in the post. The first shows you how to access the drive remotely (when you’re away from home) without using the Mionet software. The second tells you why you don’t need to worry about defragging the drive, and how to troubleshoot its performance if you think it’s not as fast as it should be. Thanks Christian!

Updated 4/5/10: Andrew Bindon has posted an easy-to-follow tutorial on how to remove Mionet completely from your computer and the My Book World Edition drive. If you, like me and many others, think Mionet is an annoyance that would best be removed, then follow his advice.


Reviews

Flickr tightens up image security

Given my concern with image theft, I do not like to hear about Flickr hacks. A while back, a Flickr hack circulated that allowed people to view an image’s full size even if the photographer didn’t allow it (provided the image was uploaded at high resolution). The hack was based on Flickr’s standard URL structure for both pages and image file names, and allowed people to get at the original sizes in two ways. It was so easy to use, and the security hole was so big, that I was shocked Flickr didn’t take care of it as soon as the hack started to make the rounds.

It’s been a few months now, and I’m glad to say the hack no longer works. I’m not sure exactly when they fixed it. Since it’s no longer functional, I might as well tell you how it worked, and how they fixed it.

[Photo: Flickr photo 511744735]

First, let’s look at a page’s URL structure. Take this photo of mine (reproduced above). The URL for the Medium size (the same size that gets displayed on the photo page) is:

http://flickr.com/photo_zoom.gne?id=511744735&size=m

Notice the last URL parameter: size=m. The URL for the Original size is the same, except for that last parameter, which changes to size=o. That makes the URL for the original photo size:

http://flickr.com/photo_zoom.gne?id=511744735&size=o

Thankfully, that no longer works. If the photographer disallows the availability of sizes larger than Medium (500px wide), then you get an error that says something like “This page is private…”

Second, they’ve randomized the actual file names. So although that image of mine is number 511744735, and it stands to reason that I would be able to access the file by typing in something like http://farm1.static.flickr.com/231/511744735_o.jpg, that’s just not the case. Each file name is made up of that sequential number, plus a random component made up of letters and numbers, plus the size indicator. So the actual path to the medium size of the image file is:

http://farm1.static.flickr.com/231/511744735_b873d33b12_m.jpg

This may lead you to think that if you can get that random component from the URLs of the smaller sizes, you can then apply the same URL structure to get at the larger size, but this is also not the case. It turns out that Flickr randomizes that middle part again for the original size. So although it stays the same for all sizes up to 1024×768, it’s different for the original. For example, the URL for the original size of that same photo is:

http://farm1.static.flickr.com/231/511744735_d3eb0edf2d_o.jpg

This means that even if you go to the trouble of getting the file name for one of the smaller sizes, you cannot guess the file name of the original photo, and this is great news for photographers worried about image theft.
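
To make the pattern explicit, here’s a small sketch of how those static URLs are assembled, using the example photo above. The key point is that the original size carries its own secret, so knowing the secret from the smaller sizes gets you nowhere:

```python
# Flickr's static image URLs follow this pattern:
#   http://farm{farm}.static.flickr.com/{server}/{id}_{secret}_{size}.jpg
# The "secret" is the random component; the original size uses a separate secret.

def flickr_url(farm, server, photo_id, secret, size):
    return f"http://farm{farm}.static.flickr.com/{server}/{photo_id}_{secret}_{size}.jpg"

# Values taken from the example URLs above (my photo 511744735).
print(flickr_url(1, 231, 511744735, "b873d33b12", "m"))  # medium size: works
print(flickr_url(1, 231, 511744735, "b873d33b12", "o"))  # guessed original: no such file
print(flickr_url(1, 231, 511744735, "d3eb0edf2d", "o"))  # the real original, with its own secret
```

With a secret made up of ten random letters and digits, there are far too many combinations to guess by brute force, which is exactly the protection photographers wanted.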

While I’m writing about this, let me not forget about spaceball.gif, the transparent GIF file that gets placed over an image to discourage downloads. It can be circumvented by going to View >> Source and looking at the code to find the URL for the medium-size image file. It’s painful, but it can be done, and I understand there are some scripts that do it automatically. The cool thing is that after Flickr randomized the file names, it became next to impossible to guess the URL for a file’s original size. The best image size that someone can get is 1024×768, which might be enough for a 4×6 print, and can probably be blown up with special apps to a larger size, but still, it’s not the original.

Perhaps it would be even better to randomize the file name for the large size as well, so that it’s different from the smaller sizes and the original size. That would definitely take care of the problem. Still, this is a big step in the right direction.
