My auto-update worked!
… Not that I expected it not to, but last night was the first time I’d actually had it run, on its own, in a live service!
- It made the FTP connection & found the new file
- It munged the downloaded data into an update file
- It copied the current database and applied the update to the copy
- It switched the service to the new database
- It updated the news ticker on the login page
<Does the happy dance />
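For the curious, the shape of that pipeline – munge the download, update a copy, switch – can be sketched in a few lines. This is a toy Python sketch only (the real thing does the FTP fetch and talks to a real database; every name here is my own placeholder):

```python
# Toy sketch of the update pipeline: munge -> copy-and-update -> switch.
# All function names are illustrative, not the real service's code.

def build_update(raw_lines):
    """Munge downloaded data into (key, value) update records."""
    records = []
    for line in raw_lines:
        line = line.strip()
        if not line or line.startswith("#"):   # skip blanks and comments
            continue
        key, _, value = line.partition("\t")
        records.append((key, value))
    return records

def apply_update(db, records):
    """Copy the current database and apply the update to the copy."""
    new_db = dict(db)          # never touch the live data directly
    for key, value in records:
        new_db[key] = value
    return new_db

def switch(state, new_db):
    """Point the live service at the new database in one step."""
    state["live"] = new_db
    return state
```

The one design point worth stealing: the live database is never modified in place – the update is applied to a copy, and the switch is a single pointer swap, so a failed update leaves the service untouched.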
You gotta love it…. it turns up everywhere:
The last 20% of getting a service just right for launch will take 80% of the allocated time.
Yes folks – I’m in the tidy-up / fettle / tweak phase, and they are all nasty wee fiddly bits: something doesn’t work quite right in IE; a button needs to be moved over just so; the wording on help pages; etc…
Creating a new service, or indeed even just changing the visuals for an old service, is never a simple task.
Firstly you have to come up with a coherent design: one that complements the contents of the site; one that works across all pages within the site; one that is visually appealing; and one that all the parties involved are happy with… Not my forte: I get a designer to do that.
Next you need to modify this design so that it can be created as a validating, accessible, cross-platform base.
What I like to do is then mock up a few pages, maybe half a dozen, as proper XHTML pages – so that people can poke and prod them, be happy that they render across all browsers and platforms, that they scale with font sizes, and that they look fine on different monitor/window sizes.
I’ve done all that, I’ve created the 100+ web pages… and now I’m going through the laborious task of verifying each one actually validates, and actually renders correctly under a range of browsers.
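The grunt work of that verification can at least be batched. A hedged sketch in Python: XHTML is XML, so a plain XML parse catches the gross errors (unclosed tags and the like) before the pages go anywhere near the W3C validator – this is not full DTD validation, and the paths are illustrative:

```python
# Batch well-formedness check for XHTML pages.
# Catches unclosed/mismatched tags; full validation still needs a real validator.
import pathlib
import xml.etree.ElementTree as ET

def check_pages(root_dir):
    """Return (file, error) pairs for pages that fail to parse as XML."""
    failures = []
    for page in sorted(pathlib.Path(root_dir).rglob("*.html")):
        try:
            ET.parse(page)     # XHTML is XML, so a bare parse flags gross errors
        except ET.ParseError as err:
            failures.append((str(page), str(err)))
    return failures
```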
… and who said writing web pages was easy? <chuckle />
I had an interesting chat with a self-confessed Old School academic: he’s in a deeply unfashionable area of research, and publishes in deeply unfashionable journals…. but he makes sure that everything he publishes goes into his local Institutional Repository.
I ran my idea of a CRIS-like system past him, and he spotted an immediate flaw: “It’s mine!”
He will not share anything until it has been published. He will not put unpublished work anywhere that it can be got at. The problem is that your unpublished work can be plagiarised, and published, before you finish your work… meaning that you are now plagiarising someone else – on your own research!
I asked him about copies of his work, and if he keeps them on the fileservers in his college: Nope, he keeps them on a removable hard disk, which he takes home with him every night.
So where does that leave us?
- I think we need to accept that the old school have a point: plagiarism is rife, and not just at undergrad level – it happens at all levels of academia.
- I think that the “google generation” will be less paranoid about their work… and more aware of computing systems (on which: who else noticed that Peter Murray-Rust mentioned having disk-level encryption on his laptop when giving his presentation at OR08?).
- I think that the idea of providing a backup (or archive) for “work in progress” is valid, and that the idea of a hierarchical system can be sold.
BUT (and you notice it is a pretty damn big “but”), we will need to be sure that the archive is secure, that work cannot be copied, and that the academic feels firmly in control.
On another topic
My friend was hugely supportive of his local repository: not only were the staff excellent at handling the deposit and sorting out all the metadata stuff for him; but he was actually able to raise the profile of his work!
He drums into his students two messages when it comes to publications:
- Do NOT release anything into the public domain until your work has been definitely accepted
- Make sure you put a copy into the local IR: the more people who find your work, the greater the pool of people who might cite it: a 1% citation rate from 10 readers gives 0.1 citations; from 100 readers it gives 1: a 10-fold increase!
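The arithmetic behind that second message is just an expected count – the rate stays the same, the pool doesn’t. A throwaway sketch (the function name is mine):

```python
# Expected citations scale linearly with the audience size.
def expected_citations(rate, readers):
    """Expected number of citations = citation rate x number of readers."""
    return rate * readers
```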
He told me a story of, when he was in China over the summer, a student submitting a piece for his Master’s degree. A quick read of it showed that this was an incomplete work, by someone else. Further, fairly simple, investigation revealed it was written by a PostDoc, in a US University, and was going through its final review process.
Now, as you may have gathered, I work in academia…. which means the public sector.
When I build a web service, I have two sets of customers:
- I have the people who are funding the service, my organisation, and the people who define the service guidelines: they expect a certain level of functionality; a certain level of reliability; and that the service enhances their reputation(s) with its good looks and slick behaviour
- I also have the people who will be using the service, mainly staff & students of HE and FE organisations. JISC’s new policies are also rolling out services to the schools sector, and some services (depending on the funding streams) are even open to the general public to use
This presents an interesting challenge:
If my customer base is “anybody”, I cannot discriminate against anybody – which means that the basic functionality of the service needs to be universally available (given the restrictions of the web protocols)
- The core interface must work irrespective of platform, browser, or ability of the user (within reason)
I have no problems with additional features being available to visual browsers, in a point-and-click interface… heck, if you want to have a Virtual Reality interface, with 3D rendering and swooping’n’flying – go right ahead SO LONG AS THE BASIC FUNCTIONALITY IS AVAILABLE WITHOUT IT
If you want to write, and maintain, two interfaces – go right ahead SO LONG AS THE BASIC FUNCTIONALITY IS AVAILABLE IN ONE
I’m sorry – but unless I have a service that works for “anybody”, I’m discriminating.
Accessibility is not about getting what you want looking good for you. Accessibility is about making it available for “anybody”.
…. you gotta love them!
I’m rebuilding a multi-million record database, as the supplier has switched data formats.
Each step is trivial: simple to do; well documented; and a complete step in its own right… but when you’re processing 35 years’ worth of data in 64 files into 24 sub-databases, this takes time.
… and then one goes wrong!
Somewhere in 10-million-odd lines of SGML is something throwing a Perl script… possibly a spurious Unicode character.
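My usual trick for this sort of hunt is to scan the raw bytes and report anything outside printable ASCII, with line and column numbers, so you can jump straight to the culprit. A generic sketch (the real files and the Perl script are the service’s own; this is just the technique):

```python
# Scan a file's raw bytes for suspect characters: anything >= 0x80,
# or a control byte other than tab / LF / CR. Reports (line, byte, column).
def find_suspect_lines(path, limit=10):
    """Return up to `limit` (line_number, byte_value, column) hits."""
    hits = []
    with open(path, "rb") as fh:
        for lineno, raw in enumerate(fh, start=1):
            for col, byte in enumerate(raw, start=1):
                if byte >= 0x80 or (byte < 0x20 and byte not in (0x09, 0x0A, 0x0D)):
                    hits.append((lineno, byte, col))
                    if len(hits) >= limit:
                        return hits
    return hits
```

Reading in binary mode matters: decoding the file first can itself blow up on the very byte you’re looking for.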
There are plans afoot for a three-day developer-focused conference (four days, if you count the plans for a pre-event “Alpha Developer Day”: aka code-monkey workshops).
“The primary activity of the event will NOT be the typical conference with scheduled presentations. Rather, the focus should be upon providing a venue where collaborative learning and participation can take place in an open forum.”
I admit a certain fondness for these sorts of events: I go to too many where the managers & senior staff sit around and promise blue skies and clear sailing…. without any consideration to the cold hard realities of working at the pointy end of the code-face.
I’ve already been in touch with the organisers and seen the draft plan for the event… it’s looking good. I’m guessing London for the location, given the committee.
I’ll see you there 🙂