Wednesday, December 16, 2009

I oppose the mandatory internet filter proposed in Australia

It will come as no surprise to anyone that I oppose the idiotic mandatory internet filter being proposed by the Australian federal government at the moment.  I took the time today to write to my local member, Michael Danby, to oppose the policy.  I suggest that anyone who agrees with me that the filter is stupid, which should be anyone who understands how the internet works, do the following:


  1. Sign the petition against the policy at GetUp
  2. Write a letter to your own Member of Parliament complaining about the policy.  Most if not all members of parliament will have web sites with feedback forms.  Who knows if they actually read them, but it will only take 10 minutes of your time.  Besides, you'll feel better after doing it.
Below is the content of the message I wrote to Mr Danby

Good afternoon Mr Danby, I am a voter registered in your electorate, and would like to speak with you regarding the proposed mandatory internet filter announced by Stephen Conroy and in the media eye at the moment. I am opposed to any such measures, not because I oppose censorship (which I do), but because the scheme will inflict significant penalties on normal people accessing the internet and it will simply not work.
When the trial was introduced, a 16 year old boy managed to circumvent the measures put in place within 30 minutes (ref http://www.zdnet.com.au/news/security/soa/Teen-cracks-AU-84-million-porn-filter-in-30-minutes/0,130061744,339281500,00.htm). Any user capable of using Google will be able to bypass the filter by using a foreign proxy (of which there are many, e.g. http://www.hidemyass.com/) or an encrypting router (such as The Onion Router, ref http://www.torproject.org/). This indicates that anyone who wants to get around the filter can do so trivially. Children growing up right now already have the skills to circumvent these filters. Given that this is the case, all the filter will do is inconvenience ordinary users viewing legitimate content.
As another has put much more eloquently than I, the use of the filter also directly contradicts the interests of the NBN (ref http://newmatilda.com/2009/12/16/conroys-clean-feed-wont-block). The filter has not been trialled at any speeds above 8 megabits/sec, 1/12th of the proposed bandwidth of the NBN. The federal government is sending a mixed message: trying to usher in a new era of connectivity in this country while attempting to hold it back with ineffectual and intrusive measures.
I have worked in the I.T. industry for 16 years, and this is the single most stupid technology policy that I have seen in my working career. I implore you to oppose this policy within your party and within parliament. I would welcome the opportunity to speak directly with you on the topic. I can be reached by email on bruce@brucecooper.net or by phone on xxxxxxxxx  
Thank you for your time,

Bruce.
It's not poetry, but I hope it gets the point across.

Wednesday, December 9, 2009

I want to invest in Social Business for Christmas

My extended family is beginning to ask the standard Christmas question: what do you want for a present?  I'm having difficulty thinking of a gaudy trinket to ask for, so I've decided to ask for a donation to a worthy cause.

We saw Muhammad Yunus on Andrew Denton's Elders show the other night, and I was very impressed with his vision for improving the lot of the world's poor.  He makes it seem like solving poverty isn't hard, which is a great contrast to what many others say.  I especially like the concept of a Social Business, whose purpose is to serve the people in a locally logical way rather than to turn a profit.  Interestingly, one would invest in a social business with the intention of getting the investment back at some point, although not necessarily with a profit.

So now I'm all fired up, and I'm going to ask people to invest in Social Business instead of buying me presents.  The only problem is I don't know how to let people do that.  I'm going to have to do some research to find out how it can be done.

Tuesday, November 24, 2009

TI introduces a customisable watch which does HRM out of the box

Engadget have just reported that TI have released a hackable watch, which can do all sorts of things, including heart rate monitoring (HRM), straight out of the box.  This is really interesting.  I wonder if I can make it work with Bleep.  Of course, I should probably concentrate on finishing Bleep first.  At $49 I reckon it's a steal!  I might ask Santa for one for Christmas.

Monday, November 16, 2009

Google Wave for EJA Enterprise Futures Forum 09

A lot of people are saying that using Google Wave to discuss conferences live is the new hotness. Given that there will be a special Google Wave announcement at tomorrow's Enterprise Futures Forum in Melbourne, I'm willing to guess there will be waves for the conference.

In anticipation, I have created a wave for the tech discussion session I am running on local implications for cloud computing. There's not much there at the moment. I'm hoping to bulk it out a bit this afternoon and tonight in prep for tomorrow's conference.

Saturday, November 14, 2009

Abbe May at the Wesley Anne



Melissa, Courtenay and I went to see Abbe May perform a blues/rock solo gig at the Wesley Anne last night. We always try and see her when she's in town, and as usual she delivered. The style was a bit different this time, as she was on her own and had to adapt some of her songs to fit the format. Abbe also performed a bunch of covers, as she explained afterwards, to "keep it fun for me", and you could tell from the performance. Especially good were the two covers originally by Willie Dixon, whom Abbe cites as a major influence. 'Twas great.

She's playing the Edinburgh Castle (in Melbourne) next Friday, and she's back at the Wesley Anne on the 27th.  Go see her.  It's a bargain.

Thursday, November 12, 2009

EJA Futures Forum, Nov 17th

Enterprise Java Australia are holding a conference on the 17th of November in Melbourne, with keynote speeches on the Broadband initiative, Green IT, SOA, and Google Wave.  I will be facilitating one of the afternoon tech sessions on Cloud Computing.  If you're at the event, come and say hi to me.


There's a 2 for 1 registration offer open until midday on Friday.

Friday, November 6, 2009

Melissa's name is on the wall



It's not nearly as bad as all that.  In fact it's a good thing.  Melissa's first solo show outside of the university system opened last night at Metalab in Sydney, and was very successful.  She sold some pieces, chatted with lots of people, and we had some fun.  The proprietors of Metalab are very welcoming and friendly, and we all went out for Vietnamese food afterwards, which I thought was a nice touch.

Apparently numbers for the opening were a little down on usual, but that is because both artists that were opening that night are from out of state, so the locals weren't brought in by the artists themselves.  Still, there were lots of people there even if they weren't spilling into the street so I thought it went very very well.

If you are in Sydney and you'd like to check it out, the exhibition is on till the 26th of November 2009 at 10B Fitzroy Place, Surry Hills, NSW 2010



Thursday, November 5, 2009

What should I do when google wave topics become too popular/cluttered?

The other day, I released a mind mapping gadget for Google Wave, and it's proven to be quite popular. Popular for something I knocked together quickly, anyway. There's an active wave discussing features, which also serves as the main description of the gadget. It's getting a bit long now, and I'm aware that there is a limit to how big waves can get before they start to slow down. It's also getting to the point where I want to simplify things so that a new reader coming upon the wave doesn't get confused by the threads of conversation there.

The way that I am using wave at the moment is that there is a single shared document at the top (the root blip) which contains the topic of discussion, and in this case, a mind map of features and votes for features. The blips that come afterwards are a discussion list, much in the same way that comments can be added to blog posts. Whilst they form an important part of the wave, the value of information they contain decreases as they become less topical. It is really the latest comments and blips that are the important bit, at least to people that are returning to long running conversations.

Other systems show the most recent comments at the top, and show older comments on separate pages to stop the page from getting too big. I'm tempted to suggest that wave should do the same thing, but I wonder if that's because we're all still figuring out the best way to use wave. Are there different usage patterns for waves that mean it makes sense to have every single blip on the screen, even if it means the page is three miles long? I'm still mulling that one, but in the mean time I think there is a need to come up with a way of managing long running waves. Here are my thoughts on how it could be done.

Option 1: Create a new wave to "Continue Discussion". This is what most people seem to be doing at the moment, but it means that anybody who has linked to the page is now linking to (or embedding) a dead version of it, and would then need to click through to see the new version. It also breaks the fundamental principle of URLs, in that a URL should represent an object for its lifetime.

Option 2: Delete the old crufty posts. That's not very nice to the people that wrote those posts in the first place. Besides, those comments provide useful context for a new reader to be able to catch up with the rest of the people on the wave. In the end, you are removing information from the system rather than presenting it in an accessible way, and that's never a good idea.

Option 3: Have an archival bot participant on the wave. This bot would monitor the wave, and when the number of blips starts to get high, it would progressively copy the older blips into an archive wave and subsequently delete them from the original. It would also add a link to the end of the root blip showing people where the archive wave is.

I've had a quick look to see if anyone has done this yet, and I haven't found anything, but I think it's a great idea. The only technical issue with this approach that I see at the moment is that the archived blips would not have the same author(s) as the original blips, as it would be the bot that authored them. The bot could add some text indicating who the authors were, but it's not quite the same.
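
To make Option 3 a bit more concrete, here is a minimal sketch of the archiving policy in Java. It deliberately models blips as plain objects rather than using the real Wave Robots API (whose classes and method names I haven't reproduced here), and the thresholds are illustrative assumptions only, so treat this as a sketch of the idea rather than the bot itself.

import java.util.ArrayList;
import java.util.List;

// A minimal stand-in for a blip; in a real robot this would wrap the Wave API's blip object.
class Blip {
    final String author;
    final String text;
    Blip(String author, String text) { this.author = author; this.text = text; }
}

// Sketch of the archival policy: once a wave grows past a threshold, move the oldest
// discussion blips into an archive list and note the original author in the copied text.
class ArchivalBot {
    private static final int MAX_BLIPS = 50;   // assumed threshold, tune to taste
    private static final int KEEP_RECENT = 20; // how many recent blips stay behind

    // Returns the blips that should be copied to the archive wave and deleted from the original.
    List<Blip> selectBlipsToArchive(List<Blip> discussionBlips) {
        List<Blip> toArchive = new ArrayList<>();
        if (discussionBlips.size() <= MAX_BLIPS) {
            return toArchive; // nothing to do yet
        }
        int removeCount = discussionBlips.size() - KEEP_RECENT;
        for (int i = 0; i < removeCount; i++) {
            Blip original = discussionBlips.get(i);
            // Prefix the copied text with the original author, since the bot will be the
            // nominal author of the archived copy.
            toArchive.add(new Blip("archival-bot",
                    "[originally by " + original.author + "] " + original.text));
        }
        return toArchive;
    }
}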

I really like the idea of the archival bot. Once I get back from Sydney I might give it a go.

Monday, November 2, 2009

Mind Map Gadget for Google Wave

As you can probably tell by my recent posts, I've been mucking about with Google Wave for the last week or so.  It shows a lot of promise, but we still need to work out the best way to use it.

Some colleagues and I were discussing some practice development the other day.  One of them said that they had created a mind map on mind42.com and had shared it with us so that we could map out some ideas.  Mind42 is a great tool and normally I would jump straight on it, but it seemed unnatural to leave the context which we had created in the wave.  It would have been much cooler if we could have had the mind map directly in the wave.

Google has thought of this, and has included the ability to incorporate gadgets into your wave, which allows essentially any web application to participate in waves.  A mind map is a natural tool to include in waves, as it forms the start of a lot of collaborations, which is exactly what wave is for.  Rather than wait for mind42 to change their application so that it could be embedded in Google Wave, I decided to write my own.  Mind maps are relatively straightforward applications, and I wanted an excuse to use GWT in anger.
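
To give a rough idea of why a mind map is so straightforward to build, here is a minimal sketch of the kind of data model such a gadget might keep in the wave's shared state, in plain (GWT-compatible) Java. The class and field names are illustrative assumptions, not the gadget's actual source.

import java.util.ArrayList;
import java.util.List;

// A mind map is just a tree of labelled nodes; each node also carries a vote count
// so participants can vote on features directly in the map.
class MindMapNode {
    final String label;
    int votes;
    final List<MindMapNode> children = new ArrayList<>();

    MindMapNode(String label) { this.label = label; }

    MindMapNode addChild(String childLabel) {
        MindMapNode child = new MindMapNode(childLabel);
        children.add(child);
        return child;
    }

    // Serialise the subtree to a simple indented outline; a gadget would store something
    // like this (or JSON) in the wave's shared key/value state so every participant sees it.
    String toOutline(int depth) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < depth; i++) sb.append("  ");
        sb.append(label).append(" (+").append(votes).append(")\n");
        for (MindMapNode child : children) sb.append(child.toOutline(depth + 1));
        return sb.toString();
    }
}

For example, a root node of "Bleep features" with a couple of children serialises to a small outline string that fits comfortably in gadget state.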

The result is my newly released Google Wave component.  I've uploaded it to the Google Wave samples gallery, but if you have access to the wave preview then you can go directly to the source (with a sample) at this wave link.  Install it, have a play, and make suggestions for improvements.  I'm hoping we'll start using it within our organisation as well.

Here's a video of it in action:

Wednesday, October 21, 2009

Does Google Wave herald the arrival of natural language interaction with computers?

I've been spending some time recently thinking about Google Wave and how it can be useful as a method of communicating and working with multiple participants at the same time (which is what Wave is for), but with a robot as one of those participants.  Wave provides an easy way to incorporate a computer participant in a conversation with people, getting it to receive all updates to the wave and provide its own input.  By doing this it makes performing workflow-like tasks much easier, and brings the computer system into the conversation with the humans, rather than having the humans interact with a computer system.

In my example, everybody has a "manager" to which their leave application must be sent before it can be approved.  The bot would detect when you were finished with your application and automatically add in your manager for approval.  This works great, but it needs to know who your manager is.  This information is stored in the HR system, which has its own user interface and logins and whatnot.  If you change manager, somebody with admin privileges on the HR system would need to go in and change your record in that UI.  This breaks the paradigm of bringing the computer into the conversation with humans.

For my demo, I was planning on writing a little user interface for my bot.  It was going to be a web application which allowed any user to go in and change who their manager is, so that they could try out the different permutations of the workflow.  Then I got to thinking: why should somebody need to leave the conversation in wave to make changes to the leave system?  Why should they have to interact with yet another user interface?  I had already put in some basic capability for it to tell you your leave balance if you asked it, so why not extend that even further?

Imagine the following scenario:
Bob is having a discussion with Jane about planning for the next release of their product on Google Wave.  They have a query about upcoming leave for the staff, because they might need to cancel leave in order to meet deadlines.  In order to do this, they have to get a list of users on the project, log onto the HR system, and perform queries on each of those users.  They went to the HR system's world, rather than bringing the HR system into their conversation.
Wouldn't it be cool if Bob could talk to the HR system instead?  He could add the HR bot to the wave, and say something like:
Hey HRBot, what is the upcoming leave for the following users:
  • Mary 
  • John
  • Simon
The bot could then parse the question, send the request to the HR system web services, and provide a response directly in the wave so that both Bob and Jane can see the results.  Any leave requests would be listed as links to the request waves that submitted those requests, so Bob could quickly check to see how important that leave is or ask the users if it is okay to cancel that leave.
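
As a very rough illustration of how little "natural language" is actually needed inside such a narrow context, here is a sketch in Java of how a bot might pick the user names out of a question like the one above. The pattern and the downstream HR lookup are assumptions for illustration, not a real HR API.

import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Sketch: recognise a "what is the upcoming leave for ..." question and extract the
// bullet-pointed names that follow it. A real bot would then call the HR system's
// web services for each name and reply directly in the wave.
class HrBotParser {
    private static final Pattern LEAVE_QUESTION =
            Pattern.compile("what is the upcoming leave", Pattern.CASE_INSENSITIVE);

    List<String> extractNames(String blipText) {
        List<String> names = new ArrayList<>();
        if (!LEAVE_QUESTION.matcher(blipText).find()) {
            return names; // not a question this bot understands
        }
        // Treat each subsequent bullet line as a user name.
        Matcher bullets = Pattern.compile("^\\s*[-*\u2022]\\s*(\\S.*)$", Pattern.MULTILINE)
                                 .matcher(blipText);
        while (bullets.find()) {
            names.add(bullets.group(1).trim());
        }
        return names;
    }
}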

People have been trying to get natural language systems going for decades, and it's still really, really hard.  When I was a postgraduate student, I entered the Loebner Prize competition with a colleague of mine, Jason Hutchens.  The purpose of the Loebner Prize is to encourage people to write a bot that is capable of fooling humans into thinking they are talking to a person instead of a computer.  This is impossibly difficult of course, and pretty silly really, as Marvin Minsky has pointed out, but it's a good bit of fun to write a chatterbot.  The bot that comes closest to being human wins an annual prize of $2000.  Natural language processing was Jason's area of research and he had won the prize before, so we decided we'd have a crack.

We lost (curse you Ellen Degeneres!), but sold the software to a German company that wanted to put chatterbots on companies' web sites as a level-0 helpdesk, for a sum considerably more than we would have won as prize money.  In the end, they had difficulty selling the software and I don't think it ever really went anywhere.  The range of language that the bot was expected to deal with was very broad, which makes parsing it exponentially more difficult, plus websites in those days weren't really sophisticated enough to support that level of interaction.

These days, I think there is more of a case for using natural language parsing bots within narrow contexts (to keep the complexity down).  Salesforce have an interesting demo which shows the sort of thing that I am talking about within Wave.  They talk about customer interaction, but I think it will be more useful within an organisation.

In the future, I think we'll see more and more of these bots participating in our business practices.  They can enrich the information we type (for example, by putting contextual links into what we type, or bringing up supporting reports and figures when we are performing tasks, that sort of thing), plus they can become active participants in the procedures while still giving us control over what is going on.



P.S. I feel terrible that I have used cancelling leave as an example here.  I would never advocate cancelling leave as a valid project management technique.  It's a terrible thing to do to your staff, and you should never do it.  It's only used as an example.

Tuesday, October 20, 2009

Who plays the part of transformation in mashups?

In the last week, two people have independently told me about an Australian government sponsored conference to create interesting mashup applications from government data.  I love the idea, and I'm really glad that the government believes that its data should be freely available.  I think most app providers are now realising the power of providing open access to their data to drive adoption.  In my opinion however, independent transformation of data between web applications is still missing as a generic tool for mashup creators.

Generally, in enterprise as in mash-up applications, the source data is not in the correct format to be directly consumed by the final application.  As an example, I am writing an iPhone application which takes heart rate monitor recordings of your exercise and stores them as a Google spreadsheet.  The reason I chose to store the information in an online spreadsheet instead of a bespoke database service is that Google already provide all of the tools to make the data in the spreadsheet easily available as XML for others to consume.  It does this using the Atom protocol, which is great, but hardly easy to consume.

Traditionally, a mash-up is seen as the combination of data from one or more external sources with a javascript driven user interface.  The data flow looks something like this.



This is great, however it introduces a great deal of coupling between the components.  The mashup provider needs to communicate directly with the data sources, transform them into its own native format, then consume it.  There's no opportunity to substitute in a different datasource if one becomes available, or to easily fix things if the source data format changes slightly.  In enterprise applications, this has long been recognised as an issue, and ESBs were developed as a way of handling it.  When an ESB is used correctly, the source data (or application) is abstracted from the destination by a transformation process, usually performed by XSLT.

I think that the same approach should be used for mash-up style applications.  The big advantage this brings is that it releases the data from the application (and the user interface).  More importantly, it allows the application itself to fetch data from different sources.  It is no longer limited to the sources that the programmers put in.  A sufficiently talented user can take any datasource, transform it into the correct format that the application expects, and then point the application at that transformed source.



For this to work, there is a need for a generic XSLT service that can take a data feed and an XSL stylesheet and produce the desired output.  W3C provide a service which does exactly this.  Unfortunately the bandwidth requirements of any large enterprise use would crush their server, so they have put limitations on the service to restrict it to personal use.  This is a shame.  I've written a very similar service for Bleep, but it runs as a free Google App Engine app, which has quite severe resource limitations of its own.  I reckon Google should release a transformation service of its own.  It would be very useful in many of its apps.  There's no way to make advertising revenue off it though :-/
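
The core of such a service is tiny. Here is a minimal sketch, using the standard javax.xml.transform API in Java, of applying a stylesheet to a feed fetched by URL. The two URLs are placeholders of my own, and a real hosted service would need to add caching, error handling and resource limits on top of this.

import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.stream.StreamResult;
import javax.xml.transform.stream.StreamSource;
import java.io.StringWriter;

// Minimal sketch of a generic "transform this feed with this stylesheet" operation.
public class FeedTransformer {
    public static void main(String[] args) throws Exception {
        // Placeholder URLs: the source data feed and the XSL stylesheet to apply to it.
        String feedUrl = "http://example.com/spreadsheet-feed.atom";
        String xslUrl = "http://example.com/feed-to-app-format.xsl";

        Transformer transformer = TransformerFactory.newInstance()
                .newTransformer(new StreamSource(xslUrl));

        StringWriter output = new StringWriter();
        transformer.transform(new StreamSource(feedUrl), new StreamResult(output));

        // In a hosted service this would be written to the HTTP response instead.
        System.out.println(output);
    }
}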

It's not really within the average person's skill set to write a transform.  Many software engineers do not know how to do it properly.  In the future, I'd like to think that a web app will be written which brings it more within the reach of normal internet consumers.

To bring this back to the govhack conference I mentioned at the beginning of this post, I think it's good that the government wants people to make mashups, but in some ways they are a little misguided.  It's not just about the applications.  Most people will just put some data up on a Google map, which is hardly innovative.  Instead, what I would like to see is people taking the source data, transforming it, and correlating it against other data sources to produce new data sets.  Then it's possible for any number of people to take that data and visualise it in cool and impressive ways.

For Bleep, some of my random thoughts on data transformation and visualisation have been gathered at this page

Monday, October 19, 2009

Using Google Wave for Workflow tasks


I've been thinking over the last few days about what Google Wave could be used for. Obviously it can be used as a document collaboration and review platform. It can also be used as a multi-user chat program, although there are probably other existing programs that are just as good for that. Some people have claimed that it isn't anything revolutionary. In and of itself this is true, as it just takes concepts already available in email, instant messaging and collaborative documents and puts them together. What I'm more interested in, however, is what we can do now that it has brought those technologies together.

In a blog post, Daniel Tenner quite succinctly outlines where Google Wave will be useful, and primarily it's going to be used by people working together. Why couldn't we use it to do workflow related stuff too, especially when there is an automated component to it? I decided to look at how we could do leave forms. The first question I asked was "why would you want to do leave forms in wave?". There are already web applications for doing workflow. The problem that I see with those is that they are still fairly rigid in their operation. Whenever your manager is on leave or you want to go slightly outside the pre-defined workflow, the procedure becomes very brittle and you need to go to a user with superuser access to get things done. In addition to this, all notifications that you have new forms to attend to, or that the status has changed, go out by email. Email shouldn't be a notification system. We just use it that way because it's familiar, and it is the tool that we spend most of our time in.

Assuming that Google Wave becomes popular enough that we use it each day, wouldn't it be nice if the workflow/collaboration tool entered into our messaging tool?  That way we wouldn't need to log into a separate tool to manage things. It would be there in front of us, and allow us to deal with the situation immediately.

To test this theory, and also to play with Google Wave bots, I have written an extension to Google Wave which gives users the option to create a leave application in a wave. Interestingly enough, I found it easiest to document the procedure directly in Wave, which would make it very easy to introduce new users to our procedures. The procedure wave is available at Leave Application Procedure. One nice thing about this is that if HR needs to change the procedure, it automatically pops back up in people's inboxes so that they see the changes. There's no need for notifications to be sent out, as the change to the document (to which everyone is subscribed) automatically gets distributed. Likewise, if a staff member has a question on the procedure, it can be asked directly in the wave itself (privately if necessary) to provide context to the conversation.

When a user creates a new leave form wave, it automatically includes a bot which is responsible for progressing the wave through the workflow. This is done by a series of buttons (or actions) at the bottom of the document which take the standard approval route (Draft -> Submitted -> Approved -> Processed). It is flexible, however: anyone that needs to deviate from the process, say to get additional approval from another team leader because the staff member is currently assigned to another team, can simply add the other team leader to the wave and have a chat. All of the context associated with the process is kept with the wave, and it can easily be searched later on to see what happened.
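
To illustrate the approval route the buttons drive, here is a minimal sketch of that state machine in Java. The states come straight from the route above; the transition rules (only the applicant may submit, only the manager may approve, and so on) are my assumptions about how such a bot would enforce the flow, not the extension's actual code.

// Sketch of the leave form's approval route: Draft -> Submitted -> Approved -> Processed.
enum LeaveState { DRAFT, SUBMITTED, APPROVED, REJECTED, PROCESSED }

class LeaveForm {
    private LeaveState state = LeaveState.DRAFT;
    private final String applicant;
    private final String manager;

    LeaveForm(String applicant, String manager) {
        this.applicant = applicant;
        this.manager = manager;
    }

    // Called when the applicant presses the "Submit" button at the bottom of the wave.
    void submit(String who) {
        require(who.equals(applicant) && state == LeaveState.DRAFT, "only the applicant can submit a draft");
        state = LeaveState.SUBMITTED;
    }

    // Only the manager may approve or reject a submitted application.
    void approve(String who) {
        require(who.equals(manager) && state == LeaveState.SUBMITTED, "only the manager can approve");
        state = LeaveState.APPROVED;
    }

    void reject(String who) {
        require(who.equals(manager) && state == LeaveState.SUBMITTED, "only the manager can reject");
        state = LeaveState.REJECTED;
    }

    // HR (or the bot itself) marks the approved leave as processed in the HR system.
    void process() {
        require(state == LeaveState.APPROVED, "can only process approved leave");
        state = LeaveState.PROCESSED;
    }

    private static void require(boolean condition, String message) {
        if (!condition) throw new IllegalStateException(message);
    }
}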

It's also possible for the bot to take a greater role in the process itself. It can check leave balances (assuming that the leave system is available to it), add leave to the company leave calendar, and perform any number of other integration tasks, because it is the thing that is managing the workflow itself. It's very flexible, easy to change, and completely under the control of the organisation. One thing I played with was getting the bot to understand (as best as bots can) natural language. If you wanted to query your leave balance, for example, you could start a wave with the bot and ask it "What is my leave balance?". It could then look up your balance and reply. This would free up HR staff from having to perform mundane tasks. Obviously bots have got a long way to go before they can understand our language properly, but if queries conform to simple grammar rules then it should work.

If anyone would like to have a play with what I have written, let me (brucejcooper@googlewave.com) know and I will add you to the procedure wave, which will allow you to install the extension and create a dummy leave application. It puts me in as the approver (manager) for everyone, except for me, for whom it uses another bloke. Only the manager can approve/reject an application. If you say anything to the bot, it replies with a message about your leave balance (which is bogus; the bot isn't connected to our HR system).

What does everyone think? Is this a good idea for managing workflow? Until we have broader access for people to get on Wave it will be difficult to tell, but I think it is a good use.

P.S. I wrote this post in Wave originally, but had to copy and paste it here because the embedding API doesn't allow anonymous viewing yet.  Google Wave is great, but it is still definitely a beta product.

UPDATE: I've created a screencast demonstration of how the flow will work.  Remember this is intended as a proof of concept rather than the full thing, but it gets the point across.


Saturday, October 17, 2009

Interesting article on app pricing

Gizmodo have an interesting article on the price of iPhone applications, how they have dragged the consumer's expectation of app pricing down, and how this might not be a good idea in the future.

I can certainly say that I won't be expecting to make much money out of Bleep. It's taken longer than expected to develop, and I don't think I'm going to sell lots of copies. How anyone can make a business out of developing these things is beyond me... I suppose we'll see :)

Another Heart Rate monitor device/app for the iPhone

There's another company producing a heart rate monitor device for the iPhone. It's outlined at fastcompany.com, and looks great. It's exactly the sort of thing that would render Bleep irrelevant. Sadly, it's not going to be made into a product at this stage :(

Friday, October 16, 2009

New blog format for the crimson cactus

When I started Crimson Cactus, I started up a blog for it. It seemed to make sense, and I could keep the company posts separate from holiday pictures and musings and whatnot that way. As it's turned out, I find that I want to cross-post. That is, I want to be able to post to both of my blogs.

There are a couple of ways of doing this, but I think it just goes to show that I'm doing it wrong. What I really want is to continue posting to my personal blog, tag those posts that are of interest to Crimson Cactus, and then have my web page pick up that tag. This is exactly what I've done now, so all of the old posts have been imported into my personal blog, and that's what you'll be seeing from now on.

For those of you that follow via RSS/Atom, The new link to use is http://feeds2.feedburner.com/brucecooper/Pqee

Fitbit review on engadget

Engadget have released a review of the Fitbit networked pedometer. I remember first seeing this about 12 months ago, when it had just launched at TechCrunch 50. I like the idea of the device, but it is yet another thing to carry around, and yet another thing to charge.

It makes me think of the belt valet computers in David Marusek's Counting Heads (thanks for the recommendation @doctorow). One day, not too far away, we will have computers that we carry, strapped to our person (perhaps in belt form), that can handle all of the biometric sensing that we want. They will be able to count our steps, work out our heart rate, and record everything that we hear and see for future reference.

I'm really excited about the prospect, even if it does open up a lot of privacy concerns. It will be important that we as individuals retain control of the information that is being collected. I'm pretty sure that Google having access to all of my heartbeat (which is basically an EKG) information would be a bad idea. David's sequel, Mind over Ship, shows us a very good example of the misuse of complete information on people.

Sunday, October 4, 2009

Costumes from fancy dress party



Melissa and I went to a Royalty themed fancy dress party last night, so we went as Louis XVI and Marie Antoinette. Here's a couple of photos.

Tuesday, September 29, 2009

Gizmodo AU running a blog theme of fitness for geeks this week

Gizmodo are running a theme of playing with balls this week, which is right up Bleep's alley. In the linked article, they mention heart rate monitor gadgets in particular. At least my approach will be relatively inexpensive. What a pity that Bleep isn't ready for publication yet. I'll be following their posts with interest.

Monday, September 28, 2009

Ahh, there _are_ heart rate monitor accessories for the iPhone already

I was operating under the impression that nobody had created a heart rate monitor system for the iPhone yet. This seemed illogical to me as it is such an obvious thing to do.

As it turns out, there is one. I found it today at Smheartlink's site. It looks like a great product, but it is a lot of money to spend, especially after you have already purchased a HRM belt.

It's a bit disappointing to see this considering my app will do much the same stuff, but I still see a niche for my app, as it doesn't require you to charge and carry another device around with you, plus it will be a bit cheaper :)

Wednesday, September 23, 2009

Apple approves how many apps a day?!?

I saw an article on Gizmodo a few minutes ago that says Apple approved almost 1,400 iPhone apps last Friday. Even on the slow days they approve hundreds of applications.

It's massively impressive that Apple have got so many applications. It just goes to show how much of a runaway success they have on their hands. I can't help wondering, though, how hard it will be for anyone to find the app that they want when there are so many apps to choose from. I suppose that's why we've got app review sites popping up now, like the App Store equivalent of Gizmodo... Here's hoping I can get them to review my app when it's finished.

In Bleep development news, I got it running on my iPhone 3G last night (rather than just the simulator) and it works okay. There are some performance problems which I think I can rectify fairly easily, and then there is just user interface tweaking to go.

Monday, September 21, 2009

Progress on Bleep

I had hoped to finish Bleep to the point where it could be submitted to Apple over the weekend.  Sadly, I've run into some problems causing the application to crash on the iPhone, even though it runs fine in the simulator.  I'm also trying to polish the user interface to make it a better experience for users.

In the mean time, I've uploaded a sneak peek video of Bleep in action.  Bear in mind that this is an early version of the software, and it is still being tweaked. Have a look:

Sunday, August 23, 2009

Just got back from a week of Snowboarding

I just got back from 5 days in the snow at Mt Hotham in Victoria. Despite the weather forecasts looking like complete arse all week, we got to ski 4 days out of 5, which is not bad for Australia. Only Friday let us down, with bucketing rain and howling wind. Given we were due to head back to Melbourne on Friday anyway, we packed it in and headed back early.

One of these days, I'm going to go to a country with real snow and find out what Snowboarding is really about :)

Why is requesting resources from other hosts such a big problem in Javascript?

Recently, I found out that Yarra Trams has published a web service interface for finding out information on tram arrival times. This is awesome, and I've been trying to think of cool little apps that I can write to take advantage of it. I'm also playing with GWT at the moment, so perhaps I could do something that way... There is a problem however, in that the Same Origin Policy will block any attempts by my javascript code to access the webservice, as it will come from a different origin.

There's a couple of things that I could do about this:
  1. Ask Yarra Trams if they could host my work, but that kind of defeats the purpose of a mashup, doesn't it? Plus, to be honest, the chances of me ever coming up with something that they would want to host are minimal :)
  2. Write a proxy service that I host myself, and send any requests through it. This kind of seems pointless, as there is already a perfectly good webservice out there that I can use. Besides, if what I wrote ended up becoming popular, I'd be up for a bandwidth bill for all the traffic to the web service.
  3. Try and convince YarraTrams to use a cross site friendly interface, such as is provided by google data. Fat chance of that happening. Besides, there's nothing wrong with web services. This callback based approach seems like a hack, probably because it is.
  4. Use a new HTML5 extension called CORS which allows for fetching resources from other origins.
The first three options are all really nasty. The fourth seems like it is perfect. It is specifically designed to allow for fetching resources from differing sites. However, it requires the destination server to include a header that specifies what external domains are allowed to access it, so it still won't work for me :(
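
For what it's worth, the server-side change is small. Here is a sketch of a servlet filter that a data provider like Yarra Trams could add to opt in to CORS; the class name and the wildcard origin are placeholders of mine, and older browsers without CORS support would still be out of luck.

import java.io.IOException;
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.FilterConfig;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletResponse;

// Sketch: a filter the data provider could put in front of its web service to declare
// which origins may call it cross-site under CORS.
public class CorsFilter implements Filter {
    public void init(FilterConfig config) { }

    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletResponse http = (HttpServletResponse) response;
        // "*" opens the service to any origin; a real deployment might list specific sites.
        http.setHeader("Access-Control-Allow-Origin", "*");
        chain.doFilter(request, response);
    }

    public void destroy() { }
}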

I've been thinking about the SOP restrictions and how CORS works, and I must admit I'm confused. The restrictions that it places on requests just seem over the top. I understand that it is important to stop XSS attacks, but there should be a way for the javascript programmer to explicitly state that they understand the risks and promise to treat the requested data as potentially dangerous, so that they can use it. Requiring any form of server side change means that almost all mashup style applications can't operate correctly.

To come back to my example, if I want to perform a POST operation to a web service, which returns XML, which I then parse and use as data to, say, plot on a map, I can't see any security problem with allowing me to make an XMLHttpRequest to go do exactly that. If I were to eval() the result, I could potentially open up my application to XSS attacks, but I'm not that stupid.

Now perhaps I haven't spent enough time thinking about this, but I don't see why we can't have an override flag on the XMLHttpRequest object to allow us to disable the SOP for an individual request. That would give me the flexibility to decide where I want to perform an unsafe request and take the appropriate steps to make sure I don't get burned by the results. Perhaps the security experts out there can educate me as to what I've missed.

Monday, August 10, 2009

Google Apps allows for custom SMTP servers

I've been using my Google account to store mail for a while now, including forwarded mail from client organisations, where this is allowed. The only problem that I've had so far is that when I'm sending mail as another identity (like a client one), it would always come up as "from me@brucecooper.net on behalf of me@the.right.domain". It's a bit annoying, but that's what Google had to do in order to be good email citizens and not get everything marked as spam. As it turned out, many Exchange servers automatically mark such email as spam anyway, so some people lost mail that I sent them.

Google recently introduced a new feature whereby you can specify your own SMTP server to use when sending mail, and you can specify a different one for each identity. This is great, as you no longer get the "on behalf of" bullshit, and your mail doesn't get marked as spam any more. I was originally stumped by the fact that I couldn't get to the SMTP servers of the organisations that I wanted to send mail from, but then it dawned on me that I could use any SMTP server that would allow me to relay mail as other identities. I used my ADSL provider's and all is good. Of course, if the organisations that I'm sending mail as had SPF rules set up to disallow this sort of thing I'd still be in trouble, but so far that hasn't been a problem.

TramTracker has a Web Service!

I'm a fan of the TramTracker iPhone application. It's a little doohickey that fetches information from Yarra Trams' real time tram arrival service to tell you when your next tram will be coming. Having used it a lot over the last couple of months, I wondered last night how the app got its data. What service did it contact to fetch the data?

So I hooked up a logging proxy to my iPhone and traced the calls it was making. It turns out TramTracker has a fully functioning and documented web service just sitting there, begging to be used by people. I'm now trying to think of what other nice apps could be made. My first thought was an Android version of the TramTracker application, but that's just a clone (plus I don't own an Android handset)... How about a dashboard widget? Oh, it turns out there already is one! It's good to see that there is a quango out there that is doing its IT right for a change...

What other sorts of mashups could we make? One other thought I've had is making a Google Transit datasource from this information, but I suspect Google is already talking to Yarra Trams about that.

Saturday, July 11, 2009

The Brown Paper Collective Launches

Melissa and some of her colleagues are launching their new collaboration, the Brown Paper Collective, at 'this is not a design market' in Melbourne in July.

The Brown Paper Collective is a group of artists from the fields of drawing, glass and jewellery, and the market at which they will make their first group outing will be happening in Melbourne on Sunday the 19th of July at The Factory, 500 La Trobe St, Melbourne from 10am - 5pm.


Sunday, June 14, 2009

Online mind maps

I was going to write a blog post yesterday about where integration platforms were going, which seems to me to be online webapps without the need for an IDE at all. Some products, like Oracle's AquaLogic ESB, are pretty much already there. I couldn't quite gather my thoughts properly though, so I thought I would do a mind map. I was going to use FreeMind, but considering I was talking about web-apps, I thought I'd do a search for what's out there. Turns out there are heaps of online mind map editors, but most of them charge for anything but basic capabilities.

I did find mind42 though, which appears to be quite good. I think I'll be recommending its use in the future. Below is the mind map that I'm working on. As you can see, I still haven't gathered my thoughts very well :-/ One of the nice things about this is that using an online service means that we can post links (just like the one below) in our Wiki documentation and have live updating when the mind map changes. Brilliant!

Thursday, June 11, 2009

The king is alive, and he's riding the tram network in Melbourne

Moments after taking this shot, the passenger folded up his Elvis cut-out and got off the tram. Given that the cut-out had well worn fold marks at the hips and knees, I can't help wondering if he takes the King with him on all tram rides for company (or to deter other people from sitting next to him).

Either way it's hilarious

Tuesday, May 5, 2009

How to automatically forward email from Exchange without losing headers

UPDATE: I've now created a service that makes forwarding email much easier. If you are interested in this, please have a look at the service's site.

I've got a million email accounts. Every time I start work on a new client site, I get given yet another email account. It's a pain in the butt to manage all of these, so wherever possible I forward the mail on to my main gmail account where it can get filtered, stored and searched easily.

This works great for sites with unix based email, but more often than not my clients use Microsoft Exchange for their mail. You can set up a redirecting rule to forward the email (as long as the server has been set up to allow this), but when it does so it strips off the To: and CC: headers from the message when it sends it on. For example, if Bob Log had sent a message to Me@mycompany.com and SomebodyElse@mycompany.com, when it was forwarded on to my gmail account it would still appear to come from Bob Log, but the To: would just be me@gmail.com, not the original recipients. This won't work!

I've toyed with a number of solutions to this problem, including writing a bot that uses EWS to poll the server, re-form the message and send it on, but that requires a bot to run on the client's LAN, and it may not be able to forward on the message, as getting access to Exchange to send the message can sometimes be difficult.

I now have a solution that works without requiring any additional software to be running on the client's network. The solution involves forwarding the message to an intermediate account as an attachment (which preserves the headers), filtering the message back out of the attachment at the intermediate account, then sending that message on to gmail in all its original glory. To do this, you require an intermediate email account that you can use purely for the purposes of filtering the mail, which is capable of piping incoming mail to a perl filter script. Generally, these aren't that hard to come by. I happen to have a getting started plan with Cove here in Melbourne which fits the bill nicely. It's free to run (although they do have a $2 setup fee) and its servers are in Australia, which is a benefit for me. If you live elsewhere, there are probably other options that would do just as well.

To set it up, there are three steps,

step 1: create the filter script
Below is a perl script which takes an email address as a parameter, and reads a MIME encoded email on STDIN. It will look for a part with a content-type of message/rfc822, which is an embedded message, and then it will stream that message out to sendmail with the supplied email address parameter as the final destination. This file should be uploaded onto your server somewhere where the mail filter can get hold of it.

I originally wrote a script that used the MIME::Parser perl module, but I've found that most hosting providers don't have that module installed, so it was easier to just do it from scratch. I'm not a perl programmer really, nor have I spent a lot of time on this script, so it definitely could be improved, but it works!
#!/usr/bin/perl

my $recipient = $ARGV[0];
my $boundary = '';
my $endMarker;
my $partType;
my $sendmail = "/usr/sbin/sendmail -oi $recipient";

# Reads a line from STDIN, and makes sure it isn't the EOF
sub fetchLine {
    my $txt = <STDIN> or die "Reached end of file prematurely";
    chomp $txt;
    return $txt;
}

# reads a message part, looking for an rfc822 message.  If it finds one, it
# forwards it on to the recipient.  When it finds the part end, it returns 1
# if there are more parts, or 0 if it is the end of the message
sub parsePart {
    my $isMessage = 0;
    my $returnCode = -1;

    # First, read the headers, looking for a content type
    while ((my $text = &fetchLine()) ne '') {
        if ($text eq 'Content-Type: message/rfc822') {
            $isMessage = 1;
            open(SENDMAIL, "|$sendmail") or die "Cannot open $sendmail: $!";
        }
    }
    # Then read the body, streaming it out if it is a message
    # End the loop when we find a boundary or the end marker
    while ($returnCode == -1) {
        $text = &fetchLine();
        if ($text eq $boundary) {
            $returnCode = 1; # Meaning we still have parts to parse
        } elsif ($text eq $endMarker) {
            $returnCode = 0; # Meaning we are finished parsing the message
        } elsif ($isMessage) {
            print SENDMAIL "$text\n";
        }
    }
    if ($isMessage) {
        close(SENDMAIL);
    }
    return $returnCode;
}

# First, Read Headers, looking for the multi-part content type and
# boundary separator
while ((my $text = &fetchLine()) ne '') {
    if ($text =~ m/^Content-Type: (.*)/i) {
        $nextline = &fetchLine();
        $nextline =~ m/\s+boundary="(.*)"/i or die "Could not get boundary after content type";
        $boundary = "--$1";
        $endMarker = "--$1--";
    }
    # We don't care about any other headers
}

# Check to see that we have the right mimetype and a boundary
die "No boundary found" if $boundary eq '';

# Skip until the first part separator
while ((my $text = &fetchLine()) ne $boundary) {
}

# Parse the message, looking for a part with a type of message/rfc822
while (&parsePart()) {
}

exit 0;

step 2: set up the filter in cpanel (or whatever else you use)
On your server, you now need to set up mail filtering so that any incoming mail from your work account that isn't a bounced message gets sent to your filter script. In cpanel, I did this by setting up an email filter for all mail, which looked something like this:



step 3: enable forwarding of mail in Exchange
Now that you've got your email forwarding filter set up, all that remains is to set up Exchange to forward any incoming mail to your filter account. You do this by selecting Rules and Alerts



And then setting up a rule that looks like below:

Be careful with what you put in your rule definition, as some rules are "client only", which means they will only run when Outlook is open. As an example, I tried to make it also mark the message as read when it forwards, but that is "client only", which means the rule won't run unless Outlook is open :(

Once you've got that set up, you can test. Send a message to your work email and see if it makes it through. If anything goes wrong, it should bounce with a message telling you what went wrong. One thing I did notice though is that if I send the message from Gmail itself, it tends to ignore the message when it gets forwarded through, as it already has a copy (in Sent), so I send test messages from an alternate account just to be safe.

Okay, so in summary, it's possible to forward messages on from Exchange, but it requires a man in the middle to extract the message contents, and it's a bit fiddly. If you like this tip, let me know.

Sunday, May 3, 2009

Could I use my iPhone to work on?

I'm an IT consultant. As a result, I spend the vast majority of my time at work doing one of the following
  1. Reading or Composing Email
  2. Reading or writing Word Documents
  3. Editing our corporate Wiki
  4. Researching stuff (or skiving off) on the Web
  5. Looking at Microsoft Project plans
  6. Very occasionally coding... very occasionally
To perform these tasks, I lug around a quite heavy laptop. It's not a particularly special laptop, but it does the job. I would like to exchange it for something lighter and easier to work with in order to save my back, especially when I ride to work. I originally thought about getting a netbook. They seem to fit the bill nicely, except for a couple of annoying things:
  1. The screen is a tad small to be using all day long
  2. The keyboard could be considered small to be using all day long.
  3. They don't really have enough grunt to do coding
The first two problems can easily be solved by using an external keyboard, mouse, and screen. I always work in offices, so it generally isn't hard to find something that I can appropriate for the purposes of working while there. The third problem is a little more tricky. One way I've thought about solving it is using remote desktop to a server. Given that I generally need a server to code on anyway (I do enterprise SOA work), this seems to make sense. I simply log into the server (either via ssh or VNC/RDP) and I can do anything I would have originally wanted to do on my laptop, albeit with a little more lag. An RDP server would also allow me to get to those Windows-only tasks that I occasionally need to do, without needing to bloat my netbook with software.

This sounds great, and I might just do it, but why should I carry a wee little laptop around if I'm never going to use it as a laptop? I'll use my iPhone when I'm on the road, and plug the netbook into a KVM when I'm in an office. Why not just use the iPhone? I love my iPhone, and I carry it with me everywhere. It can do most of what I need to do just as well as a netbook, but it suffers from the smallness problems even worse than a netbook does. Why couldn't Apple make a docking station for the iPhone which allows it to work with an external keyboard, mouse and screen? That way, I could carry my phone around with me, get to work, plug it into the docking station, and work directly on my phone.

I think the docking station would need the following features to be successful:
  1. Be relatively small, so that it can be transported if necessary
  2. Provide charge to the iPhone while operating
  3. Have the following connectors:
    1. 1x Power - I would prefer an integrated transformer, but a wall wart would work too
    2. 3x USB - one for keyboard, one for mouse, one spare for something else...
    3. 1x DisplayPort, or DVI, or whatever to connect up a monitor
    4. 1x Ethernet port
    5. 2x Speakers
    6. Audio jacks for external speakers and mic
    7. RCA/Composite video out, so that it can do everything a current iPod dock does
    8. IR receiver for those cute little remotes.
    9. Possibly a phone handset to allow it to be used as a phone while docked. Perhaps just a jack to allow a handset (or hands free kit) to be plugged in
  4. Provide at least 1920x1200 resolution screen - this would probably involve improving the graphics card of the iPhone
  5. Be capable of receiving calls while in the dock. If the user removes the phone from the dock to receive a call, the user's session should be saved so that he can pick up where he left off when it is re-docked. Likewise, if I pull it out of the dock in the evening, take it home, and dock it again, my session should pop straight back up
  6. Be capable of running faster (at a higher clock speed) when plugged into power. iPhones are deliberately left running at a low clock speed to conserve battery power, but when plugged in they could easily ramp up.
I realise that iPhone apps, as they currently stand, would not be suitable for use on a large screen, but they could be adapted. Alternatively, the phone side and the desktop side could be kept largely separate, and there could be dedicated desktop applications (just a port of the normal os x version) along side the mobile versions. They would still need to synchronise app data (bookmarks for example), but that wouldn't be difficult to achieve.

I don't think this is an especially original idea. I know that other people have thought about doing it for ages. I just wish we could convince Apple to produce it as a product. Here's how I think we do it: tie-ins to .Mac. .Mac is a good service, but most people don't want to fork out for what they can get for free elsewhere. If the iPhone plus had better integration with .Mac it would make it a much more compelling offering. iDisk is a perfect example. Devices with limited storage need online storage. Voilà!

Ahh Apple, I doubt you will ever read this, but if you do, please make this device! I'll buy two. Lots of people I know will buy them. It'll be awesome.

Saturday, May 2, 2009

Tasmanian Holiday 2009

Melissa and I just got back from our holiday in Tasmania. We spent 5 nights at Cradle Mountain, 2 in Launceston, and traveled on the Spirit of Tasmania. As long as you don't mind losing a night, the Spirit of Tasmania is not a bad way of travelling. Below are some blurry iPhone cam shots from the trip.

Thursday, April 23, 2009

Melissa's Thesis Corrections

Melissa got her thesis corrections back today. One correction in total, and that was for a typo.

Congratulations Mel. That's a great effort!

Friday, March 13, 2009

Melissa's Jewellery exhibition



My wife Melissa is having her final Master of Fine Art exhibition on the 1st of April (no, it's not a joke) at Monash Uni in Melbourne. Click on the image for more info and a slideshow.

Monday, February 23, 2009

More bike lanes for Melbourne

I saw this in The Age this morning. It's good to see that cycling as a form of transport is getting some more attention these days. Melbourne used to be consistently rated as one of the world's best cities to live in, based in no small part on its excellent transport options. Due to decades of under-investment we can no longer lay claim to that title. Maybe we can start going back in the right direction.

Monday, January 12, 2009

Present Time



When I started trying to lose weight, I said to myself that if I reached a weight milestone, then I would get myself a present, paid for by my tax return. It couldn't be a present that would undo all of my good work, so chocolates and wine were out; rather, it had to be something that would excite me but also enable me to continue getting fitter and losing weight.

The reward weight I chose is 80Kg. It's not my goal weight, as I would need to go further still to get rid of my belly, but rather it was intended to be an interim reward.

Well, I was getting close to the reward weight yesterday, so I went down to Goldcross Cycles in Richmond, Victoria and bought a 2008 Fuji Team bike, on the assumption that I wouldn't be able to pick it up straight away and by that time I would probably have hit my weight. To my surprise, I got there today, so I'm very happy.

Sunday, January 4, 2009

Cloud Computing in Development

Virtualisation is a pretty commonly known practice these days.  IT operations staff use it as a method of consolidation and getting rid of old legacy hardware.  Now we are being presented with virtualisation on demand facilities, usually referred to as Cloud Computing.   This allows any user to create a virtual machine as a clone of a disk image at any time, use it for a while, and then throw it away.  This presents some interesting opportunities to streamline the development process, especially on large distributed systems such as one would find in a SOA style architecture.

When developers are working on a given task, they will generally wish to work in their own environment which is not subject to any other changes (and starts, stops) that other developers may be making at the same time.  This becomes more and more important as the size of a development team increases, as more and more changes will be put into the system every day.  Traditionally, each developer will run a copy of the software on their own development machine, or on a shared development server.  Every couple of days, if the Continuous Integration system indicates that the software is in a working state, the developer will update his system with everyone else's changes, and can continue developing on an up to date copy of the software.  Depending on the level of changes, this could take a significant amount of time.

If each developer in the team has their own environment, we quickly reach the point where there are dozens of environments, all running slightly different versions of the system.  If we add in the system testing environments, it all becomes very complicated very quickly.  On a recent project, we had a total of 40 developers working on a distributed web-service-based system, co-operating to provide a business capability. Performing a full build from scratch took approximately one and a half hours.  Every few days, each developer would spend this time getting his system up to date.  We also had two engineers working almost full time on keeping the system test environments up to date, along with managing the other aspects of environment management (operating system updates, testing bug fixes provided by software vendors, etc.).  That is a lot of time spent just keeping environments up to date.  To make things worse, if a problem is discovered during the build process or a bug sneaks into the system, the person maintaining the environment is faced with the prospect of going through the entire build process again to revert to an older copy of the software.

This is where Cloud Computing can help, or rather two important concepts from it: disk snapshots, and the ability to quickly create computing environments.  The idea is quite simple: when a user (developer or tester) wants an up-to-date environment, he finds the disk image of the last known good Continuous Integration build, clones it, runs it as a virtualised environment and voilà!  No waiting around for one and a half hours, no worrying about whether the build has completed successfully, and no wondering whether the little experiment you did last Thursday has affected the operation of your system, because you have just created a completely fresh environment.


When a new tool is required for the development environment, or a new version of one is released, instead of instructing each developer to install it separately, all that has to be done is to update the base image. The next time a developer creates an environment, he will automatically pick up any changes that have been made.


The situation is just as good for the system testing environments, as we generally want all of our system testing environments to be identical.  Instead of having to build each environment separately, we simply build it once and clone it as many times as we need.

Snapshots and Cloud Computing also have the advantage of making very efficient use of the available computing resources, particularly disk storage.  By using copy-on-write volumes, each environment only requires disk storage for what has changed between it and the base image it was cloned from.  Because the environments will be 99% the same, each one will not use very much storage at all.  One-terabyte disks are commonplace now, and a single one would be capable of storing hundreds of disk images.
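
To put some purely illustrative numbers on it: if the base image is 20 GB and a typical developer only changes around 200 MB of it, then 40 copy-on-write environments consume roughly 20 GB + 40 × 0.2 GB = 28 GB, rather than the 800 GB that 40 full copies would need.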

But the biggest advantage of this approach is that it drastically reduces the workload on the project's environment engineers.  If a change is required, or a particular developer needs an extra environment (say, to test operation in a cluster), the developer can do it himself.  The few tasks the environment engineer still needs to perform also scale much better: he can manage a project with 100 engineers almost as easily as one with just 10.

Of course, there are a few things that need to be done to your environments to support operating in a cloud computing system, especially in the presence of cloning.  For example, Oracle's application server OC4J stores the hostname and IP address of the server in its configuration, which will need to be changed each time the disk image is cloned.  Many cloud computing environments (including EC2) do not support multicast either, so alternative methods must be found for managing clusters.  None of these problems is insurmountable, however.
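
As a rough illustration of the kind of first-boot fix-up involved, the sketch below renames a freshly cloned machine. It is my own example rather than anything product-specific, and the naming scheme and file locations are assumptions; the application server's own configuration (such as OC4J's stored hostname and IP) would need an equivalent rewrite.

    #!/bin/sh
    # First-boot personalisation for a cloned image (sketch only).
    NEW_HOST="dev-$(date +%s)"            # any scheme that yields a unique name
    hostname "$NEW_HOST"                  # set the running hostname
    echo "$NEW_HOST" > /etc/hostname      # make it stick across reboots
    # point the loopback alias at the new name so local lookups keep working
    sed -i "s/^127\.0\.1\.1.*/127.0.1.1 $NEW_HOST/" /etc/hosts
    # the application server's stored hostname/IP would be rewritten here too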

More of a problem is the licensing arrangement for Application Servers.  Some vendors do not charge license fees for development environments, which is great.  Others, such as Oracle, charge for each server, or each named developer.  It is difficult to reconcile this licensing model with a cloud computing system, where environments come and go very often.

The final challenge is organisational.  Some development shops are not set up in a way that makes it easy to use cloud computing services.  It may not be possible to reach EC2 from your intranet, or management (or even your client) may be nervous about running software on computers that are not under their direct control.  To get around this, you can set up your own virtualisation cloud within your organisation.  It's not that hard to do, and depending on how sophisticated you make the setup, you may get most of the benefits you would see from using a real cloud computing provider.

First up, we need some shared network storage on which to keep our disk images.  Because different people on different computers will want access to the images, we need a way of getting at them over the network.  ATA over Ethernet (AoE) lets a central storage server export disk images that other computers (the virtualised ones) can access over the network and treat as a normal drive.  iSCSI is another option; it is more standards-compliant and works over routed networks instead of just the local Ethernet segment, but it is a little more resource hungry.  Both are well supported by Linux, so either should be easy to use.
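
To give a feel for how lightweight AoE is, here is a minimal sketch using the vblade and aoetools packages on Linux; the image path and shelf/slot numbers are arbitrary choices of mine:

    # on the storage server: export an image file as AoE shelf 0, slot 1 via eth0
    # (vblade runs in the foreground; the package also ships a daemonised wrapper)
    vblade 0 1 eth0 /srv/images/dev-base.img

    # on a developer machine: load the AoE driver and discover exported devices
    modprobe aoe
    aoe-discover
    # the exported image now appears as a local block device, e.g. /dev/etherd/e0.1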

Whatever storage solution we choose should also support copy-on-write snapshots of disk images.  Linux LVM has snapshot support, but performance drops as the number of snapshots increases.  A better solution is ZFS, which comes with OpenSolaris.  ZFS has very good snapshot support, along with other new and exciting storage features, but OpenSolaris only supports iSCSI, not AoE.  That's fine, as the xVM virtualisation solution we are about to talk about has iSCSI support out of the box.  Once the image snapshot box is set up, it is important to keep spare parts and take backups, as it becomes a single point of failure for all of your developers.
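
A minimal sketch of what this looks like on the OpenSolaris box follows. The pool and dataset names are made up, and I'm assuming the older shareiscsi ZFS property for exporting volumes as iSCSI targets (later OpenSolaris builds do this through COMSTAR instead):

    # create a 20 GB volume to hold the base development image
    zfs create tank/images
    zfs create -V 20g tank/images/dev-base
    zfs set shareiscsi=on tank/images/dev-base     # expose it as an iSCSI target

    # after a good CI build: snapshot the base, then clone one copy per developer
    zfs snapshot tank/images/dev-base@build-1234
    zfs clone tank/images/dev-base@build-1234 tank/images/alice-env
    zfs set shareiscsi=on tank/images/alice-env    # each clone gets its own target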

So, now we've got our snapshot storage sorted out, how are we going to do the virtualisation?  If we were going to set up a full cloud, we could use Eucalyptus.  That would allow us to centralise all of our virtual environments and provide proper scalability.  Each developer is likely to need only one environment though, and even laptops these days have enough oomph to run at least one virtual machine.  So why not let developers run the virtualised environment directly on their own machines?  Sun provides a virtualisation solution called xVM VirtualBox which can source its disk images via a built-in iSCSI initiator.  It also has a command-line utility (VBoxManage), which makes it very easy to drive from scripts.  Perfect.
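
For example, something along these lines registers a VM against an iSCSI-backed disk entirely from the command line; the VM name, server and target IQN are placeholders, and the exact flag names vary between VirtualBox releases:

    VBoxManage createvm --name alice-env --register
    VBoxManage modifyvm alice-env --memory 2048 --ostype Linux26
    VBoxManage storagectl alice-env --name SATA --add sata
    # attach the iSCSI target exported by the storage box as the VM's hard disk
    VBoxManage storageattach alice-env --storagectl SATA --port 0 --device 0 \
        --type hdd --medium iscsi --server nas.example.local \
        --target iqn.1986-03.com.sun:02:alice-env
    VBoxManage startvm alice-env --type headless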

All that is left is to produce the tools that allow users (and Continuous Integration) to manipulate disk images from their desktops, and to set up a base development image.  As the technique is intended for internal development, we have chosen to use shell scripts run over SSH, with a web application fronting them for manual management.  Setting up the images can be tricky.  Luckily, Oracle has already published some Oracle Fusion images; you can download one and convert it to a VDI image (instructions to follow).
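
The kind of script I mean looks roughly like the sketch below. It is only an outline: the host name, dataset layout, and the idea of CI recording its last good build number in a ZFS user property (ci:lastgood) are assumptions for illustration, not a description of our actual tooling.

    #!/bin/sh
    # new-env.sh NAME -- clone the last known good image and boot it locally (sketch)
    set -e
    NAME="$1"
    NAS=admin@nas.example.local

    # ask the storage box which snapshot the CI server last marked as good
    LAST_GOOD=$(ssh "$NAS" zfs get -H -o value ci:lastgood tank/images/dev-base)

    # clone that snapshot and expose the clone as a new iSCSI target
    ssh "$NAS" zfs clone "tank/images/dev-base@$LAST_GOOD" "tank/images/$NAME"
    ssh "$NAS" zfs set shareiscsi=on "tank/images/$NAME"

    # finally, register and start a local VirtualBox VM against the new target,
    # using the same VBoxManage commands shown earlier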

So, with just one extra server and a few cheap drives, we can set up a virtualised development environment with snapshot management.  It is a very easy way to get this style of development environment without having to go down the sometimes difficult path of convincing your company to fully embrace cloud computing.