Welcome to the blog Hitech Tips

A blog about web and IT. Sharing things I use at work.
Showing posts with label Software. Show all posts

Windows Live Writer: formatting images

Posted by j. manuel cordeiro Friday, November 19, 2010 0 comments

This tutorial explains how to format images in posts written with Windows Live Writer.

Difficulty level: low

 

I started out hating the approach of the Live suite, especially Live Messenger, which made my information public without my consent (yes, you can go into the settings and change it, but the default is abusive). Live Writer, however, is a killer app, and I have already written a few posts about how to use it.

I have already covered how to install this software and some aspects of using it (screen captures, and pinging Twingly and Wordpress). Now I turn to including photos in posts. This part is a bit finicky, but after a few settings tweaks it works beautifully.

The images below show the settings I regularly use for pictures. Try setting them up (to do so, insert an image and then click it to activate the option panels) and then click "Save as default". That way, the next image you insert will already have the settings you want.

 

NOTE: these screenshots are taken from Windows Live Writer on XP. The latest version (Windows Live Writer 2011) looks different, but the concepts are the same; it takes only a little effort to find the same options.

 


Image 1

- Inline for regular use on the blog (sometimes I choose Left so the text wraps around the image)

- No margins (the blog's default applies)

- Borders: I like the image with a line around it. The photo-paper style is also interesting, especially when the photograph is the centre of the post

- Hyperlink: in most cases I don't want any link, but it can be useful to link to a URL (for example, a clipping of a news item linking to its source) or to the original image (in that case, see Image 6)


Image 2

- Medium size (click the set-square icon to define what "medium size" means; see Image 3)

- Include a descriptive text for the image (it helps Google image search)


Image 3
- Define the values for the preset sizes.
- On Aventar, an image should not go much beyond 550 pixels in width (otherwise it runs over the sidebar or gets cropped); a quick way to pre-shrink images outside Live Writer is sketched below.
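If you prefer to resize a picture yourself before inserting it, rather than relying on Live Writer's presets, a minimal Python sketch using Pillow (the file names are placeholders) could look like this:

```python
# Shrink an image so it never exceeds the blog's ~550 px usable width.
# Illustrative sketch only; adjust MAX_WIDTH and file names to your blog.
from PIL import Image  # pip install pillow

MAX_WIDTH = 550

def fit_to_blog(src: str, dst: str) -> None:
    """Resize src proportionally so its width is at most MAX_WIDTH pixels."""
    with Image.open(src) as img:
        if img.width > MAX_WIDTH:
            new_height = round(img.height * MAX_WIDTH / img.width)
            img = img.resize((MAX_WIDTH, new_height), Image.LANCZOS)
        img.save(dst)

fit_to_blog("photo-original.jpg", "photo-for-post.jpg")
```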

Image 4
Nothing special to mention, but note the list of available effects.

Image 5
The list of hyperlink options for images.

Image 6
In most cases, when the image links to the original image, I want the original to be shown at full size. Set that preset here.

Image 7
Once the presets are configured, click "Save as default" and the next image you insert will immediately have these characteristics.

 


A TRICK
Sometimes I want to insert a series of images with identical settings that differ from these defaults. Example: include 10 images sized 100x100, each pointing to the original image resized to a maximum of 800x600. To save time, I insert the first image, format it that way and, with it still selected, click "Save as default". The next 9 images I insert will then have the same characteristics. Afterwards I restore the generic defaults used in the rest of my posts.


Here is a link with detailed information on how to install Windows Live Writer:

http://hitech-tips.blogspot.com/2010/02/bloggers-prestem-atencao-ao-windows.html

And here I show how to take screen captures:

http://hitech-tips.blogspot.com/2010/02/faca-posts-de-forma-rapida-e-eficaz.html

About this screen-capture program (there are many): one very practical feature worth highlighting is that it can automatically save the captured image to disk, which is especially handy when you are taking several captures in a row. It can also automatically copy the captured image to the clipboard (very useful too). The link above explains how to set that up.
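On a related note, if you ever want to script that auto-save behaviour yourself, a minimal Python sketch using Pillow's ImageGrab (assuming Windows or macOS) could look like this; it only covers the save-to-disk part, not the clipboard:

```python
# Grab the whole screen and save it with a timestamped file name.
# Illustrative sketch only; not the capture tool discussed above.
from datetime import datetime
from pathlib import Path

from PIL import ImageGrab  # pip install pillow

def capture_screen(folder: str = "captures") -> Path:
    """Capture the full screen and save it as a PNG; return the file path."""
    Path(folder).mkdir(exist_ok=True)
    shot = ImageGrab.grab()  # full-screen capture
    target = Path(folder) / f"capture-{datetime.now():%Y%m%d-%H%M%S}.png"
    shot.save(target)
    return target

if __name__ == "__main__":
    print("Saved", capture_screen())
```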


Boas postagens :)

 

Links for downloading Windows Live Writer:

[image]

 

For a while now this error message had been bugging me every time the computer booted. I finally took the trouble to look for a solution:

How to change your drive letter assignments in Windows XP or Vista to fix the "Windows - no disk" etc error message, and how to uninstall your floppy drive

URL shortener: choose the right service

Posted by Mr.Editor Wednesday, March 17, 2010 1 comments

URL Shorteners Slow Down The Web – Especially Facebook’s FB.me


 

Everyone knows the services that shorten URLs, especially Twitter users. Choose your service carefully, though. goo.gl (from Google) is the best on the two criteria presented in the article. But it is not available as a standalone service; it can, however, be used through Google Toolbar and FeedBurner.

More details about goo.gl in a post on Google's official blog: Making URLs shorter for Google Toolbar and FeedBurner

 

Charts and a more detailed write-up: URL Shorteners Slow Down The Web – Especially Facebook’s FB.me
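If you want to compare shorteners yourself, one rough approach is to time just the redirect hop each service adds before your browser even reaches the real page. A small sketch (the short URLs are placeholders, and the requests library is assumed to be installed):

```python
# Time only the shortener's redirect response, not the destination page.
# Illustrative sketch; replace the example short URLs with real ones.
import time

import requests  # pip install requests

def redirect_hop_time(short_url: str) -> float:
    start = time.perf_counter()
    resp = requests.head(short_url, allow_redirects=False, timeout=10)
    elapsed = time.perf_counter() - start
    print(f"{short_url}: HTTP {resp.status_code} -> "
          f"{resp.headers.get('Location')} in {elapsed * 1000:.0f} ms")
    return elapsed

if __name__ == "__main__":
    for url in ["https://bit.ly/example", "https://tinyurl.com/example"]:
        redirect_hop_time(url)
```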

Google Maps adds bike routes

Posted by Mr.Editor Wednesday, March 10, 2010 0 comments

[image]
To create the mapping tool, Google developed an algorithm that uses several inputs — including designated bike lanes or trails, topography and traffic signals — to determine the best route for riding. The map sends you around, not over, hills. But if you really want to tackle that Category 1 climb, you can click and drag the suggested route anywhere you like, just like you can with pedestrian or driving routes. Users can suggest changes or make corrections to routes using the ever-present “report a problem” feature on Google Maps.
Read More http://www.wired.com/autopia/2010/03/google-maps-for-bikes/#ixzz0hnFEmFrg
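Purely as an illustration of the general idea (this is not Google's algorithm), such preferences can be modelled by inflating edge costs for climbs and traffic signals before running an ordinary shortest-path search; the penalty weights below are made up:

```python
# Illustrative sketch only. Edges carry a base distance plus penalties for
# climbing and traffic signals; plain Dijkstra then prefers flatter routes.
import heapq

def bike_cost(distance_m, climb_m, signals, has_bike_lane):
    cost = distance_m + 15.0 * climb_m + 30.0 * signals  # made-up penalty weights
    return cost * (0.7 if has_bike_lane else 1.0)         # mild bonus for bike lanes

def shortest_route(graph, start, goal):
    """graph: {node: [(neighbor, cost), ...]} -> (total_cost, path)."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, edge_cost in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + edge_cost, nxt, path + [nxt]))
    return float("inf"), []

if __name__ == "__main__":
    graph = {
        "A": [("B", bike_cost(1000, 0, 1, True)), ("C", bike_cost(600, 80, 0, False))],
        "B": [("D", bike_cost(900, 5, 0, True))],
        "C": [("D", bike_cost(500, 0, 2, False))],
    }
    print(shortest_route(graph, "A", "D"))  # the flat, bike-laned detour wins
```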

How to Choose Between Joomla, Drupal and Wordpress

Posted by Mr.Editor Saturday, March 6, 2010 4 comments

[image]

This post presents three summaries on how to choose between three content management systems. These summaries give you a quick overview, but for deeper insight follow the links to the original articles.

The links for these CMSs are:

Enjoy and give your feedback.

1. From compassdesigns.net:

[image]

 

2. From goodwebpractices.com:

Wordpress Pros

  • Simple to use - No need for modifications
  • Excellent for blogging or sharing thoughts in a sequential manner
  • Even the most elderly of users can get the hang of it quickly

Wordpress Cons

  • Not developer friendly
  • The community seems to like to complain
  • Upgrades bring more bugs than fixes sometimes

 

Drupal Pros

  • Extremely developer friendly. If I loved code more I would almost always pick this system.
  • Strong community to help discern the dozens (hundreds) of functions and tags available.
  • Can be used to create some really awesome websites that can outperform a majority of other sites out there.

Drupal Cons

  • Not very designer and user-friendly. It's hard for someone with little code knowledge to make the leaps required to do the very cool things that Drupal is becoming known for.
  • Theming of Drupal has been a huge case of fail (until recently). Probably because it has been developers, not designers, that are making the themes.
  • Getting a Drupal website published could cost you more time, and thus more money, than Wordpress or Joomla.

 

Joomla Pros

  • Friendly for all types of users - Designers, Developers and Administrators
  • Huge community is awesome for assisting with creation of websites
  • Has been rapidly growing and improving itself for the past three years

Joomla Cons

  • Still not user-friendly enough for everyone to understand
  • Not quite as powerful as Drupal, and can be a bit confusing for some to jump into
  • Recently rebuilt the entire system from ground-up, and so there are still many out there sticking to the old versions (1.0.x)

3. From drupal.org:

(yes, it's one of the three projects' own opinions, but the points are realistic)

Drupal
* Rock-solid & high-quality platform
* Real multi-site feature (only one installation for several sites)
* Any kind of user groups & permissions; OpenID compliant in version 6
* Can run membership and community sites, not only plain CMS sites
* Powerful templating system. Any XHTML or CSS template can be easily converted to Drupal.
* Drupal needs a little time investment to realize all its huge possibilities
* Clear, high-quality code and API (easy to integrate with other solutions, etc.)
* Flexibility and no known limitations
* Many high-profile sites use Drupal (e.g. MTV UK, BBC, The Onion, NASA, Greenpeace UK, The New York Observer)

Joomla
* If you are not techy, it's a good place to start
* Easy install & setup with your mouse
* Easy learning curve
* Cannot integrate other scripts, etc. into your site
* Generally you cannot create high-end sites without investing a huge amount
* No SEO out of the box; URLs are not search-engine friendly
* Higher server resource utilization compared to Drupal
* Only one site per installation
* No single log-in across several sites
* No user groups & permissions
* More intuitive administration user interface
* Some polished modules for things like calendars, polls, etc.
* Modules cost you money

System Requirements:

* Drupal can work with MySQL and Postgres, while Joomla is known to support only MySQL
* Drupal can work with Apache or IIS, while Joomla is known to support only Apache
* Joomla supports SSL logins and SSL pages; Drupal is not known to support them

Site Management

* Drupal has a free add-on for workflow management; Joomla is not known to have one
* Drupal has a built-in translation manager; Joomla has a free add-on for the same
* Drupal has more granular privilege management

Interoperability:

* Drupal has iCal support [add-on]; Joomla is not known to have one
* Drupal is XHTML compliant; Joomla is not known to be
* Drupal has excellent versioning and audit trails, which Joomla lacks

Browsers share (among web dev people)

Posted by Mr.Editor Friday, March 5, 2010 0 comments

Here are some stats on browser share (among web-dev people). Specifically, the data is collected from visitors to the W3Schools site, so keep this in mind:

W3Schools is a website for people with an interest for web technologies. These people are more interested in using alternative browsers than the average user. The average user tends to use Internet Explorer, since it comes preinstalled with Windows. Most do not seek out other browsers.

These facts indicate that the browser figures above are not 100% realistic. Other web sites have statistics showing that Internet Explorer is used by at least 80% of the users

Browser Statistics Month by Month:

(Columns reconstructed from the W3Schools table: IE5 was dropped from the survey in mid-2008, Chrome appears from September 2008, and IE8 from January 2009; the per-version IE figures add up to the IE (all) column.)

Month           | IE8    | IE7    | IE6    | IE5   | IE (all) | Firefox | Chrome | Safari | Opera
January, 2008   |        | 21.20% | 32.00% | 1.50% | 54.70%   | 36.40%  |        | 1.90%  | 1.40%
February, 2008  |        | 22.70% | 30.70% | 1.30% | 54.70%   | 36.50%  |        | 2.00%  | 1.40%
March, 2008     |        | 23.30% | 29.50% | 1.10% | 53.90%   | 37.00%  |        | 2.10%  | 1.40%
April, 2008     |        | 24.90% | 28.90% | 1.00% | 54.80%   | 39.10%  |        | 2.20%  | 1.40%
May, 2008       |        | 26.50% | 27.30% | 0.70% | 54.50%   | 39.80%  |        | 2.40%  | 1.50%
June, 2008      |        | 27.00% | 26.50% | 0.50% | 54.00%   | 41.00%  |        | 2.60%  | 1.70%
July, 2008      |        | 26.40% | 25.30% |       | 51.70%   | 42.60%  |        | 2.50%  | 1.90%
August, 2008    |        | 26.00% | 24.50% |       | 50.50%   | 43.70%  |        | 2.60%  | 2.10%
September, 2008 |        | 26.30% | 22.30% |       | 48.60%   | 42.60%  | 3.10%  | 2.70%  | 2.00%
October, 2008   |        | 26.90% | 20.20% |       | 47.10%   | 44.00%  | 3.00%  | 2.80%  | 2.20%
November, 2008  |        | 26.60% | 20.00% |       | 46.60%   | 44.20%  | 3.10%  | 2.70%  | 2.30%
December, 2008  |        | 26.10% | 19.60% |       | 45.70%   | 44.40%  | 3.60%  | 2.70%  | 2.40%
January, 2009   | 0.60%  | 25.70% | 18.50% |       | 44.80%   | 45.50%  | 3.90%  | 3.00%  | 2.30%
February, 2009  | 0.80%  | 25.40% | 17.40% |       | 43.60%   | 46.40%  | 4.00%  | 3.00%  | 2.20%
March, 2009     | 1.40%  | 24.90% | 17.00% |       | 43.30%   | 46.50%  | 4.20%  | 3.10%  | 2.30%
April, 2009     | 3.50%  | 23.20% | 15.40% |       | 42.10%   | 47.10%  | 4.90%  | 3.00%  | 2.20%
May, 2009       | 5.20%  | 21.30% | 14.50% |       | 41.00%   | 47.70%  | 5.50%  | 3.00%  | 2.20%
June, 2009      | 7.10%  | 18.70% | 14.90% |       | 40.70%   | 47.30%  | 6.00%  | 3.10%  | 2.10%
July, 2009      | 9.10%  | 15.90% | 14.40% |       | 39.40%   | 47.90%  | 6.50%  | 3.30%  | 2.10%
August, 2009    | 10.60% | 15.10% | 13.60% |       | 39.30%   | 47.40%  | 7.00%  | 3.30%  | 2.10%
September, 2009 | 12.20% | 15.30% | 12.10% |       | 39.60%   | 46.60%  | 7.10%  | 3.60%  | 2.20%
October, 2009   | 12.80% | 14.10% | 10.60% |       | 37.50%   | 47.50%  | 8.00%  | 3.80%  | 2.30%
November, 2009  | 13.30% | 13.30% | 11.10% |       | 37.70%   | 47.00%  | 8.50%  | 3.80%  | 2.30%
December, 2009  | 13.50% | 12.80% | 10.90% |       | 37.20%   | 46.40%  | 9.80%  | 3.60%  | 2.30%
January, 2010   | 14.30% | 11.70% | 10.20% |       | 36.20%   | 46.30%  | 10.80% | 3.70%  | 2.20%
February, 2010  | 14.70% | 11.00% | 9.60%  |       | 35.30%   | 46.50%  | 11.60% | 3.80%  | 2.10%

 

Browsers share (among web dev people)


Data source: http://www.w3schools.com/browsers/browsers_stats.asp

 

Basically, Firefox has stopped growing since April 2009, IE is continuously declining, and Chrome is gaining ground. But please note the remark at the beginning of the text: this is among visitors of the W3Schools site.
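A quick sanity check of those claims, using values transcribed from the table above:

```python
# Compare April 2009 with February 2010 for three browser families
# (values taken from the W3Schools table above).
firefox = {"2009-04": 47.1, "2010-02": 46.5}
ie_all  = {"2009-04": 42.1, "2010-02": 35.3}
chrome  = {"2009-04": 4.9,  "2010-02": 11.6}

for name, series in [("Firefox", firefox), ("IE (all)", ie_all), ("Chrome", chrome)]:
    first, last = series["2009-04"], series["2010-02"]
    print(f"{name:9s} Apr 2009 {first:5.1f}%  ->  Feb 2010 {last:5.1f}%  ({last - first:+.1f} pp)")
```

Firefox moves by less than one percentage point over those ten months, while IE loses and Chrome gains roughly seven.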

 

For the global market, these are the numbers:

From: http://it-chuiko.com/internet/2659-top-5-browsers-data-for-february.html

One reason to stay glued to Firefox

Posted by Mr.Editor Monday, March 1, 2010 2 comments

[image]

Well, there are several. But there is one feature I cannot find in any other browser, and it keeps me glued to Firefox; I will not budge until the others offer the same. I am talking about the ability to search the page as soon as I start typing (without needing to press CTRL+F, CTRL+L or the like).

 

It is an option that is not enabled by default, but for me it is a delight. You enable it via (see the image above): Tools | Options | Advanced | General | "Search for text when I start typing".


The beauty of this feature: do a Google search, for example, open one of the result pages, type what you are looking for and, presto, you are there. It is fast. Then just press CTRL+G to jump to the next occurrence.

[image]

Feedly organizes your favorite sources in a magazine-like start page.


A magazine-like start page. A fast and stylish way to read and share the content of your favorite sites and services. Provides seamless integration with Google Reader, Twitter, Delicious, YouTube and Amazon.

 

Having it all on one page is nice. Also, specific categories can be added: check the toolbar:

[image]

 

And by clicking the arrow on the toolbar, other tools become available:

 

[image] [image]

The karma feature is nice; it shows the number of clicks on a given subject:

 

[image]

I suspect this isn't very accurate, but it can be used to get an idea of the popularity of your posts.

 

 

Another thing I enjoy is the summary capability, which shows the sites I follow as a list, making it easy to decide whether something is worth reading:

[image]

 

So, that's it. Enjoy.

Making Facebook 2x Faster

Posted by Mr.Editor Sunday, February 21, 2010 0 comments

An interesting post from the Facebook engineers about how they made Facebook twice as fast.

 

Making Facebook 2x Faster

by Jason Sobel (notes) Thu at 11:25pm

Everyone knows the internet is better when it's fast. At Facebook, we strive to make our site as responsive as possible; we've run experiments that prove users view more pages and get more value out of the site when it runs faster. Google and Microsoft presented similar conclusions for their properties at the 2009 O'Reilly Velocity Conference.
So how do we go about making Facebook faster? The first thing we have to get right is a way to measure our progress. We want to optimize for users seeing pages as fast as possible so we look at the three main components that contribute to the performance of a page load: network time, generation time, and render time.

Components Explained


Network time represents how long a user is waiting while data is transmitted between their computer and Facebook. We can't completely control network time since some users are on slower connections than others, but we can reduce the number of bytes required to load a page; fewer bytes means less network time. The 5 main contributors to network time are bytes of cookies, HTML, CSS, JavaScript, and images.
Generation time captures how long it takes from when our webserver receives a request from the user to the time it sends back a response. This metric measures the efficiency of our code itself and also our webserver, caching, database, and network hardware. Reducing generation time is totally under our control and is accomplished through cleaner, faster code and constantly improving our backend architectures.
Render time measures how much time the user's web browser needs to process a response from Facebook and display the resultant web page. Like network time, we are somewhat constrained here by the performance and behavior of the various browsers but much is still under our control. The less we send back to the user, the faster the browser can display results, so minimizing bytes of HTML, CSS, JavaScript, and images also helps with render time. Another simple way to reduce render time is to execute as little JavaScript as possible before showing the page to the user.
The three metrics I describe are effective at capturing individual components of user perceived performance, but we wanted to roll them up into one number that would give us a high level sense of how fast the site is. We call this metric Time-to-Interact (TTI for short), and it is our best sense of how long the user has to wait for the important contents of a page to become visible and usable. On our homepage, for example, TTI measures the time it takes for the newsfeed to become visible.

First Steps


From early 2008 to mid 2009, we spent a lot of time following the best practices laid out by pioneers in the web performance field to try and improve TTI. For anyone serious about making a web site faster, Steve Souders's compilations are must-reads: High Performance Web Sites and Even Faster Web Sites. We also developed some impressive technologies of our own to measure and improve the performance of Facebook as described at the 2009 O’Reilly Velocity Conference by two Facebook engineers, David Wei and Changhao Jiang.
By June of 2009 we had made significant improvements, cutting median render time in half for users in the United States. This was great progress, but in the meantime, Facebook had exploded in popularity all across the globe and we needed to start thinking about a worldwide audience. We decided to measure TTI at the 75th percentile for all users as a better way to represent how fast the site felt. After looking at the data, we set an ambitious goal to cut this measurement in half by 2010; we had about six months to make Facebook twice as fast.

Six Months and Counting...


On closer inspection, our measurements told us that pages were primarily slow because of network and render time. Our generation time definitely had (and still has) significant room to improve but it wouldn't provide the same bang for the buck. So we devoted most of our engineering effort towards two goals: drastically cutting down the bytes of cookies, HTML, CSS, and JavaScript required by a Facebook page while also developing new frameworks and methodologies that would allow the browser to show content to the user as quickly as possible.
Cutting back on cookies required a few engineering tricks but was pretty straightforward; over six months we reduced the average cookie bytes per request by 42% (before gzip). To reduce HTML and CSS, our engineers developed a new library of reusable components (built on top of XHP) that would form the building blocks of all our pages. Before the development of this component library, each page would rely on a lot of custom HTML and CSS even though many pages shared similar features and functionality. With the component library, it’s easy to optimize our HTML in one place and see it pay off all across the site. Another benefit is that, since the components share CSS rules, once a user has downloaded some CSS it’s very likely those rules will be reused on the next page instead of needing to download an entirely new set. Due to these efforts, we cut our average CSS bytes per page by 19% (after gzip) and HTML bytes per page by 44% (before gzip). These dramatic reductions mean we get our content to users faster and browsers can process it more quickly.
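As an aside, the "before gzip" vs "after gzip" distinction above is easy to track for your own pages; a tiny illustrative sketch (not Facebook's tooling) that reports both numbers for any payload:

```python
# Report raw and gzip-compressed sizes for a piece of HTML/CSS/JS.
# Illustrative only; the sample payload below is made up.
import gzip

def payload_sizes(text: str) -> tuple[int, int]:
    raw = text.encode("utf-8")
    compressed = gzip.compress(raw)  # roughly what a gzip-enabled server sends
    return len(raw), len(compressed)

if __name__ == "__main__":
    html = "<div class='post'>" + "hello world " * 500 + "</div>"
    before, after = payload_sizes(html)
    print(f"before gzip: {before} bytes, after gzip: {after} bytes "
          f"({100 * (1 - after / before):.0f}% smaller)")
```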
Cutting back on JavaScript was another challenging problem. Facebook feels like a dynamic and engaging site in large part due to the JavaScript functionality we've created, but as we added more and more features, we wrote more and more JavaScript which users have to download to use the site. Remember that downloading and executing JavaScript are two of the main issues we need to combat to improve network and render time. To address this problem our engineers took a step back and looked at what we were using JavaScript to accomplish. We noticed that a relatively small set of functionality could be used to build a large portion of our features yet we were implementing them in similar-but-different ways. This common functionality could be provided in a very small, efficient library that is also cacheable on the user's computer. We set out to rewrite our core interactions on top of this new library, called Primer, and saw a massive 40% decrease (after gzip) in average JavaScript bytes per page. Since Primer is downloaded quickly and then cached for use on future page views, it also means that features built exclusively on Primer are immediately usable when they appear on the screen; there's no need to wait for further JavaScript to download. An example of such a feature is our feedback interface which allows users to comment on, like, and share content and appears all across Facebook.
Another project I'd like to highlight requires a little more setup. As described earlier, the traditional model for loading a web page involves a user sending a request to a server, the server generating a response which is sent back to the browser, and the browser converting the response in to something the user can see and interact with. If you think about this model there is a glaring problem. Let's say it takes a few hundred milliseconds for the server to completely prepare and send a response back to the user. While the server is chugging through its work the browser is just sitting there uselessly, waiting for something to do and generally being lazy. What if we could pipeline this whole procedure? Wouldn't it be great if the server could do a little bit of work, say in ten or fifty milliseconds, and then send a partial response back to the browser which can then start downloading JavaScript and CSS or even start displaying some content? Once the server has done some more processing and has produced another bit of output it can send that back to the browser as well. Then we just repeat the process until the server has nothing left to do. We've overlapped a significant portion of the generation time with the render time which will reduce the overall TTI experienced by the user.
Over the last few months we've implemented exactly this ability for Facebook pages. We call the whole system BigPipe and it allows us to break our web pages up in to logical blocks of content, called Pagelets, and pipeline the generation and render of these Pagelets. Looking at the home page, for example, think of the newsfeed as one Pagelet, the Suggestions box another, and the advertisement yet another. BigPipe not only reduces the TTI of our pages but also makes them seem even faster to users since seeing partial content earlier feels faster than seeing complete content a little bit later.
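A minimal sketch of the pipelining idea described above (nothing like the real BigPipe, just the flushing pattern): the server emits an early page shell, then streams each "pagelet" as soon as it is generated, so the browser never sits idle waiting for the slowest block.

```python
# Streamed page delivery: each yielded chunk is sent to the browser as soon
# as it is produced, instead of waiting for the whole page to be generated.
import time
from wsgiref.simple_server import make_server

def generate_pagelet(name, seconds):
    time.sleep(seconds)  # stand-in for backend work (queries, ranking, ...)
    return f"<div id='{name}'>{name} ready after {seconds}s</div>\n".encode()

def app(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/html")])
    yield b"<html><body><h1>Streaming demo</h1>\n"      # early shell
    for name, cost in [("newsfeed", 0.5), ("suggestions", 0.3), ("ads", 0.2)]:
        yield generate_pagelet(name, cost)               # flushed per pagelet
    yield b"</body></html>\n"

if __name__ == "__main__":
    with make_server("", 8000, app) as httpd:
        print("Serving on http://localhost:8000 ...")
        httpd.serve_forever()
```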

Success!


I'm pleased to say that on December 22nd, as a result of these and other efforts, we declared victory on our goal to make the site twice as fast. We even had 9 whole days to spare!

After hitting the 2x site speed goal the team celebrated with t-shirts. And dinner (not pictured).

I hope that you've personally experienced and appreciated the improvements we've made in site speed and that this post has given you some insight in to how we think about and approach performance projects at Facebook. Stay tuned for more details on many of the projects I mention here in future blog posts and industry conferences. In 2010 look for the site to get even faster as we tackle new challenges!
Jason, an engineer at Facebook, wants to remind you that perf graphs go the wrong way.

Why do software projects fail?

Posted by Mr.Editor Friday, February 19, 2010 2 comments

Software projects, like any others, sometimes fail. Long-running (multi-year) and larger projects carry a greater risk of being cancelled. And it is precisely when the deadline approaches that the decision ends up on the table: is it worth pouring yet another pile of money into a project that will not be finished on time, or whose functionality will be substantially different from what was planned? That is the moment when one either makes one more push or decides to drain the money pit. As an example, statistics show that fully successful projects (finished on time and within budget) hover around a mere 30%.

The article at the end of this text, in English, is an interesting essay on the subject. Among the failure factors identified in the article, these stand out:

  • Unrealistic or unarticulated project goals
  • Inaccurate estimates of the resources needed
  • Badly defined system requirements
  • Poor reporting of the project's status
  • Unmanaged risks
  • Poor communication among customers, developers and users
  • Use of immature technology
  • Inability to handle the project's complexity
  • Sloppy development practices
  • Poor project management
  • Stakeholder politics
  • Commercial pressures
This comes up because of the government's announcement that it will spend 15 million euros on a software project for the Ministry of Education, plus another 15 million on the operational maintenance of that software over four years.

15 million to develop a software project is a pile of money. To get a sense of scale, a Google search returns a few examples that put this figure in context (for simplicity, I assume 1 € = 1 USD). See for example this, this and this. Put another way, 15 million euros would keep a team of about 100 people working for two years, with each team member "earning" 6,000 € gross per month (gross, to cover taxes, operating expenses and business profit): 100 people × 24 months × 6,000 € ≈ 14.4 million €. It really is a lottery-jackpot of a deal.

One would therefore assume that best practices were used to define the project's goals, thereby minimising the very first risk on the list above. One would likewise assume that a good statement of work was drawn up and put out to public tender, in order to obtain the best solution.

No. None of this was done. On the contrary, the government decided to spend an A4 page of the Diário da República on propaganda and chose to hand over a project of this size by direct award, with no tender. What are the project's goals? Nobody knows. Only a laconic description was published.

The issue here is that it is impossible to tell whether 15 million euros is a lot or a little for the project in question, for the simple reason that nobody knows what is to be built. We only know, as we have seen, that a figure of this magnitude corresponds to a project of considerable size. Which makes the carelessness in its definition all the more astonishing.

Even more surprising than the cost of the development project are the 15 million euros to be spent over four years on its operational maintenance. That is more than 10 thousand euros per day, every day, for four years (15,000,000 € ÷ (4 × 365 days) ≈ 10,274 €/day). What astronomical output is going to be produced daily? Nobody knows.

Of the risks in the list above that cause projects to fail, few will not materialise in this one. Is this the Simplex vision of this government?


Article: Why Software Fails

Why Software Fails
By Robert N. Charette
First Published September 2005

We waste billions of dollars each year on entirely preventable mistakes
Have you heard the one about the disappearing warehouse? One day, it vanished—not from physical view, but from the watchful eyes of a well-known retailer's automated distribution system. A software glitch had somehow erased the warehouse's existence, so that goods destined for the warehouse were rerouted elsewhere, while goods at the warehouse languished. Because the company was in financial trouble and had been shuttering other warehouses to save money, the employees at the "missing" warehouse kept quiet. For three years, nothing arrived or left. Employees were still getting their paychecks, however, because a different computer system handled the payroll. When the software glitch finally came to light, the merchandise in the warehouse was sold off, and upper management told employees to say nothing about the episode.

This story has been floating around the information technology industry for 20-some years. It's probably apocryphal, but for those of us in the business, it's entirely plausible. Why? Because episodes like this happen all the time. Last October, for instance, the giant British food retailer J Sainsbury PLC had to write off its US $526 million investment in an automated supply-chain management system. It seems that merchandise was stuck in the company's depots and warehouses and was not getting through to many of its stores. Sainsbury was forced to hire about 3000 additional clerks to stock its shelves manually [see photo, "Market Crash"]

This is only one of the latest in a long, dismal history of IT projects gone awry [see table, "Software Hall of Shame" for other notable fiascoes]. Most IT experts agree that such failures occur far more often than they should. What's more, the failures are universally unprejudiced: they happen in every country; to large companies and small; in commercial, nonprofit, and governmental organizations; and without regard to status or reputation. The business and societal costs of these failures—in terms of wasted taxpayer and shareholder dollars as well as investments that can't be made—are now well into the billions of dollars a year.

The problem only gets worse as IT grows ubiquitous. This year, organizations and governments will spend an estimated $1 trillion on IT hardware, software, and services worldwide. Of the IT projects that are initiated, from 5 to 15 percent will be abandoned before or shortly after delivery as hopelessly inadequate. Many others will arrive late and over budget or require massive reworking. Few IT projects, in other words, truly succeed.

The biggest tragedy is that software failure is for the most part predictable and avoidable. Unfortunately, most organizations don't see preventing failure as an urgent matter, even though that view risks harming the organization and maybe even destroying it. Understanding why this attitude persists is not just an academic exercise; it has tremendous implications for business and society.

SOFTWARE IS EVERYWHERE. It's what lets us get cash from an ATM, make a phone call, and drive our cars. A typical cellphone now contains 2 million lines of software code; by 2010 it will likely have 10 times as many. General Motors Corp. estimates that by then its cars will each have 100 million lines of code.

The average company spends about 4 to 5 percent of revenue on information technology, with those that are highly IT dependent—such as financial and telecommunications companies—spending more than 10 percent on it. In other words, IT is now one of the largest corporate expenses outside employee costs. Much of that money goes into hardware and software upgrades, software license fees, and so forth, but a big chunk is for new software projects meant to create a better future for the organization and its customers.

Governments, too, are big consumers of software. In 2003, the United Kingdom had more than 100 major government IT projects under way that totaled $20.3 billion. In 2004, the U.S. government cataloged 1200 civilian IT projects costing more than $60 billion, plus another $16 billion for military software.

Any one of these projects can cost over $1 billion. To take two current examples, the computer modernization effort at the U.S. Department of Veterans Affairs is projected to run $3.5 billion, while automating the health records of the UK's National Health Service is likely to cost more than $14.3 billion for development and another $50.8 billion for deployment.

Such megasoftware projects, once rare, are now much more common, as smaller IT operations are joined into "systems of systems." Air traffic control is a prime example, because it relies on connections among dozens of networks that provide communications, weather, navigation, and other data. But the trick of integration has stymied many an IT developer, to the point where academic researchers increasingly believe that computer science itself may need to be rethought in light of these massively complex systems.

When a project fails, it jeopardizes an organization's prospects. If the failure is large enough, it can steal the company's entire future. In one stellar meltdown, a poorly implemented resource planning system led FoxMeyer Drug Co., a $5 billion wholesale drug distribution company in Carrollton, Texas, to plummet into bankruptcy in 1996.

IT failure in government can imperil national security, as the FBI's Virtual Case File debacle has shown. The $170 million VCF system, a searchable database intended to allow agents to "connect the dots" and follow up on disparate pieces of intelligence, instead ended five months ago without any system's being deployed [see "Who Killed the Virtual Case File?" in this issue].

IT failures can also stunt economic growth and quality of life. Back in 1981, the U.S. Federal Aviation Administration began looking into upgrading its antiquated air-traffic-control system, but the effort to build a replacement soon became riddled with problems [see photo, "Air Jam"]. By 1994, when the agency finally gave up on the project, the predicted cost had tripled, more than $2.6 billion had been spent, and the expected delivery date had slipped by several years. Every airplane passenger who is delayed because of gridlocked skyways still feels this cancellation; the cumulative economic impact of all those delays on just the U.S. airlines (never mind the passengers) approaches $50 billion.

Worldwide, it's hard to say how many software projects fail or how much money is wasted as a result. If you define failure as the total abandonment of a project before or shortly after it is delivered, and if you accept a conservative failure rate of 5 percent, then billions of dollars are wasted each year on bad software.

For example, in 2004, the U.S. government spent $60 billion on software (not counting the embedded software in weapons systems); a 5 percent failure rate means $3 billion was probably wasted. However, after several decades as an IT consultant, I am convinced that the failure rate is 15 to 20 percent for projects that have budgets of $10 million or more. Looking at the total investment in new software projects—both government and corporate—over the last five years, I estimate that project failures have likely cost the U.S. economy at least $25 billion and maybe as much as $75 billion.

Of course, that $75 billion doesn't reflect projects that exceed their budgets—which most projects do. Nor does it reflect projects delivered late—which the majority are. It also fails to account for the opportunity costs of having to start over once a project is abandoned or the costs of bug-ridden systems that have to be repeatedly reworked.

Then, too, there's the cost of litigation from irate customers suing suppliers for poorly implemented systems. When you add up all these extra costs, the yearly tab for failed and troubled software conservatively runs somewhere from $60 billion to $70 billion in the United States alone. For that money, you could launch the space shuttle 100 times, build and deploy the entire 24-satellite Global Positioning System, and develop the Boeing 777 from scratch—and still have a few billion left over.

Why do projects fail so often?

Among the most common factors:

* Unrealistic or unarticulated project goals
* Inaccurate estimates of needed resources
* Badly defined system requirements
* Poor reporting of the project's status
* Unmanaged risks
* Poor communication among customers, developers, and users
* Use of immature technology
* Inability to handle the project's complexity
* Sloppy development practices
* Poor project management
* Stakeholder politics
* Commercial pressures

Of course, IT projects rarely fail for just one or two reasons. The FBI's VCF project suffered from many of the problems listed above. Most failures, in fact, can be traced to a combination of technical, project management, and business decisions. Each dimension interacts with the others in complicated ways that exacerbate project risks and problems and increase the likelihood of failure.

Consider a simple software chore: a purchasing system that automates the ordering, billing, and shipping of parts, so that a salesperson can input a customer's order, have it automatically checked against pricing and contract requirements, and arrange to have the parts and invoice sent to the customer from the warehouse.

The requirements for the system specify four basic steps. First, there's the sales process, which creates a bill of sale. That bill is then sent through a legal process, which reviews the contractual terms and conditions of the potential sale and approves them. Third in line is the provision process, which sends out the parts contracted for, followed by the finance process, which sends out an invoice.

Let's say that as the first process, for sales, is being written, the programmers treat every order as if it were placed in the company's main location, even though the company has branches in several states and countries. That mistake, in turn, affects how tax is calculated, what kind of contract is issued, and so on.

The sooner the omission is detected and corrected, the better. It's kind of like knitting a sweater. If you spot a missed stitch right after you make it, you can simply unravel a bit of yarn and move on. But if you don't catch the mistake until the end, you may need to unravel the whole sweater just to redo that one stitch.

If the software coders don't catch their omission until final system testing—or worse, until after the system has been rolled out—the costs incurred to correct the error will likely be many times greater than if they'd caught the mistake while they were still working on the initial sales process.

And unlike a missed stitch in a sweater, this problem is much harder to pinpoint; the programmers will see only that errors are appearing, and these might have several causes. Even after the original error is corrected, they'll need to change other calculations and documentation and then retest every step.

In fact, studies have shown that software specialists spend about 40 to 50 percent of their time on avoidable rework rather than on what they call value-added work, which is basically work that's done right the first time. Once a piece of software makes it into the field, the cost of fixing an error can be 100 times as high as it would have been during the development stage.

If errors abound, then rework can start to swamp a project, like a dinghy in a storm. What's worse, attempts to fix an error often introduce new ones. It's like you're bailing out that dinghy, but you're also creating leaks. If too many errors are produced, the cost and time needed to complete the system become so great that going on doesn't make sense.

In the simplest terms, an IT project usually fails when the rework exceeds the value-added work that's been budgeted for. This is what happened to Sydney Water Corp., the largest water provider in Australia, when it attempted to introduce an automated customer information and billing system in 2002 [see box, "Case Study #2"]. According to an investigation by the Australian Auditor General, among the factors that doomed the project were inadequate planning and specifications, which in turn led to numerous change requests and significant added costs and delays. Sydney Water aborted the project midway, after spending AU $61 million (US $33.2 million).

All of which leads us to the obvious question: why do so many errors occur?

Software project failures have a lot in common with airplane crashes. Just as pilots never intend to crash, software developers don't aim to fail. When a commercial plane crashes, investigators look at many factors, such as the weather, maintenance records, the pilot's disposition and training, and cultural factors within the airline. Similarly, we need to look at the business environment, technical management, project management, and organizational culture to get to the roots of software failures.

Chief among the business factors are competition and the need to cut costs. Increasingly, senior managers expect IT departments to do more with less and do it faster than before; they view software projects not as investments but as pure costs that must be controlled.

Political exigencies can also wreak havoc on an IT project's schedule, cost, and quality. When Denver International Airport attempted to roll out its automated baggage-handling system, state and local political leaders held the project to one unrealistic schedule after another. The failure to deliver the system on time delayed the 1995 opening of the airport (then the largest in the United States), which compounded the financial impact manyfold.

Even after the system was completed, it never worked reliably: it chewed up baggage, and the carts used to shuttle luggage around frequently derailed. Eventually, United Airlines, the airport's main tenant, sued the system contractor, and the episode became a testament to the dangers of political expediency.

A lack of upper-management support can also damn an IT undertaking. This runs the gamut from failing to allocate enough money and manpower to not clearly establishing the IT project's relationship to the organization's business. In 2000, retailer Kmart Corp., in Troy, Mich., launched a $1.4 billion IT modernization effort aimed at linking its sales, marketing, supply, and logistics systems, to better compete with rival Wal-Mart Corp., in Bentonville, Ark. Wal-Mart proved too formidable, though, and 18 months later, cash-strapped Kmart cut back on modernization, writing off the $130 million it had already invested in IT. Four months later, it declared bankruptcy; the company continues to struggle today.

Frequently, IT project managers eager to get funded resort to a form of liar's poker, overpromising what their project will do, how much it will cost, and when it will be completed. Many, if not most, software projects start off with budgets that are too small. When that happens, the developers have to make up for the shortfall somehow, typically by trying to increase productivity, reducing the scope of the effort, or taking risky shortcuts in the review and testing phases. These all increase the likelihood of error and, ultimately, failure.

A state-of-the-art travel reservation system spearheaded by a consortium of Budget Rent-A-Car, Hilton Hotels, Marriott, and AMR, the parent of American Airlines, is a case in point. In 1992, three and a half years and $165 million into the project, the group abandoned it, citing two main reasons: an overly optimistic development schedule and an underestimation of the technical difficulties involved. This was the same group that had earlier built the hugely successful Sabre reservation system, proving that past performance is no guarantee of future results.

 

Originally published on 21-05-2009

Note: this text concludes the series of republications.

Microsoft Outlook Social Connector

Posted by Mr.Editor Thursday, February 18, 2010 1 comments

After the splash made by Google Buzz, it is Microsoft's turn to catch the train, with Microsoft Outlook 2010.

[image]

Details on the Microsoft Outlook blog: Announcing the Outlook Social Connector.

LinkedIn, the social network geared more towards the job market, already offers a connector for this new feature on its site.

9 years of Google

Posted by Mr.Editor Wednesday, February 17, 2010 1 comments

There you have it: 9 years of googling. It is not customary in our language, but in American English it is common practice to build new verbs from nouns (funny, I could not say this using TLEBS!). So, in the Anglo-Saxon context, "to google" is equivalent to "to search". There is even a Wikipedia entry on the subject: link.

I was at university when this www Internet thing started to buzz in the academic community. "Going online" was a non-existent expression back then; had it existed, it would have meant going to the faculty lab, using a Sun workstation and, with Netscape, checking out the occasional new page, since the existing ones were rarely updated.

 

I remember perfectly a colleague who had found a personal page - that is what web pages were called back then - where the author kept a list of favourite addresses. At the time it was already in the hundreds! Those pages must have been the first portals. Did anyone anticipate the importance they would come to have? Not me, unfortunately. But some did see that potential and bet heavily on it. Yahoo, Infoseek, Altavista and several others blazed the trail. Google was among the last to arrive and triumphed thanks to the speed with which it returned results and its ability to present relevant ones, with the PageRank algorithm (link).
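For the curious, the core idea behind PageRank can be sketched in a few lines of power iteration (a textbook illustration, not Google's production algorithm):

```python
# Textbook power-iteration sketch of PageRank; illustrative only.
def pagerank(links, damping=0.85, iterations=50):
    """links: {page: [pages it links to]} -> {page: rank}."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outgoing in links.items():
            targets = outgoing or pages            # dangling pages spread evenly
            share = damping * rank[page] / len(targets)
            for target in targets:
                new_rank[target] += share
        rank = new_rank
    return rank

if __name__ == "__main__":
    web = {"A": ["B", "C"], "B": ["C"], "C": ["A"], "D": ["C"]}
    for page, score in sorted(pagerank(web).items(), key=lambda kv: -kv[1]):
        print(f"{page}: {score:.3f}")
```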

Curiously, it was also thanks to this algorithm that the so-called Google bombs emerged (link). They consist of many different web pages containing the same phrase pointing to the same address. In the 2004 Bush election campaign, the term "miserable failure" was systematically used as the link text pointing to George W. Bush's biography page. A search for those terms thus fooled Google's algorithm, and a page unrelated to the words appeared at the top of the results.

 

Nine years on, this company keeps innovating on a grand scale and is, at this moment, Microsoft's rival, with a real chance of doing to it what Microsoft once did to IBM: dethroning it from the number-one spot in world computing. And Microsoft's misfortune comes from a paradigm shift. Just as the mainframe, the central server, was dethroned by the personal computer, so the concept of "one operating system + one set of applications = one user", Microsoft's bread and butter, will become obsolete as the idea of a central server with multiple connected clients re-emerges. This is becoming possible thanks to ADSL and its ever-increasing connection speeds.

Microsoft, of course, is not asleep and has already launched the Windows Live service, a carbon copy of Google's services. But that signal is precisely the herald of the end, when the leader starts following the competition instead of the other way around. We shall see whether, as in Animal Farm, one ruler is merely swapped for another, this one with the particular trait of wanting to know everything about everyone.

Originally published on 27-09-2007

Google: "Do no evil", is it really so?

Posted by Mr.Editor Tuesday, February 16, 2010 2 comments

Google buzz

Google has decided to join the social-network bandwagon. It did so with Google Buzz, riding roughshod over all its users' privacy concerns. I started using Buzz on Fliscorno, to find out what it is about and because the blog's email is not associated with my personal email. Honestly, it is something I do not want on my personal account. I can see its usefulness for promoting content, just like Facebook or Twitter, but there is a notable difference: with those two, joining is not imposed on you. Moreover, anyone who joins Facebook or Twitter knows they are joining a network where things will be public. With Buzz, on the other hand, people had signed up with a certain level of privacy, and that level was drastically changed when Buzz was added. From one moment to the next, I get to know who other people exchange email with most often, without them having had any say in the matter. "Do no evil" is Google's slogan. Right...

 

But in these matters, to each their own. If Google's new service appeals to you, fine. If not, the article below explains how to disable it. Either way, it is best to know what you are getting into ;-)

February 11, 2010 10:01 AM PST

Buzz off: Disabling Google Buzz

by Jessica Dolcourt

 

Updated: February 11, 2010 at 12:15 p.m. PT to share a new rollout that Google implemented to better manage (and block) contacts. Also added a note about profile privacy.

Google Buzz logo

My colleague Molly Wood called it a privacy nightmare, but to many, Google's new social-networking tool Buzz is at its root an unwanted, unasked for pest. The way some of us see it, we didn't opt in to some newfangled Twitter system and we don't particularly want to see updates from contacts we never asked to follow creep up in our Buzz in-box. Call us what you will, but for curmudgeonly types like us, Buzz isn't so much social networking as it is socially awkward networking. We tried it, we didn't like it, and now it has to go.

Here's how we silenced Buzz from the desktop:

Step 0: Don't disable Buzz--yet

The automatic reaction is to scroll to the very bottom of Gmail and click the words "turn off buzz." But all this does is remove active links, leaving your profile still publicly available, along with any public buzzes you might have made while trying Buzz out. In fact, you're still technically following people, and they're following you. Not OK.

Buzz profile

Disabling Buzz isn't enough. My previous buzzes are still visible to anyone looking for them.

(Credit: Screenshot by Jessica Dolcourt/CNET)

Step 1: Purge your profile

One way to find your profile is to go to http://www.google.com/profiles and search on your name. Next, permanently delete buzzes in the public timeline by clicking the "Delete" tag. Then get to work unfollowing those that Google has "helped" you automatically follow.

Unfollow people on Buzz

From your profile, (1) click the hyperlink first and then (2) manually unfollow individuals.

(Credit: Screenshot by Jessica Dolcourt/CNET)

However, it's as if the Buzz team never envisioned anyone would want to completely opt out. You'll need to unfollow individuals one by one, which takes some time if Google subscribed you to a long list of followers. Despite what it said in our profile, we had to keep loading pages to unfollow a big chunk of friends.

Also take a moment to make sure that your profile isn't broadcasting anything you don't want it to. Click the "Edit Profile" link to the right of "Contacts" and "About me" to give your profile a once-over.

Note: If your profile was never public (and if you never experimented with Buzz), you'll have fewer privacy concerns. However, if you are getting rid of Buzz, it's a good idea to scan your profile to make sure you're not exposed on anyone's automatic list of followers.

Step 2: Block your followers

If you're serious about removing traces of yourself from Buzz's public record, you'll need to make sure you're invisible to others as well. Go back to Buzz in Gmail (if you already disabled it, you can turn Buzz back on at the bottom of the page to complete this step.) In the absence of an obvious "block all" button, we manually blocked each individual by clicking their picture from the list of followers and then selecting "Block."

Blocking people on Buzz

Blocking: Another option.

(Credit: Screenshot by Jessica Dolcourt/CNET)

At noon PT on Thursday, we noticed that Google rolled out a better interface that includes some management tools you can use to more easily block users. Prior to that, we noticed a few leftovers that were still visible in our public profile because we weren't previously able to access their profile tab. Thanks to Google's tweak, we unblocked them in a hurry.

Blocking someone won't alert them and you can always unblock them later if you change your mind about Buzz.

Better blocking in Buzz

A Thursday tweak adds drop-down tools to better manage followers.

(Credit: Screenshot by Jessica Dolcourt/CNET)

Step 3: Disable Buzz in Gmail

Now it's safe to disable Buzz in Gmail, thus removing the offending links and updates from your eyes.

Disable Gmail Buzz

Last step: Unplug Buzz in Gmail

(Credit: Screenshot by Jessica Dolcourt/CNET)

This worked for us, but leave your own tips and travails in the comments.

Related stories:
Rafe and Josh debate Google's Buzz
Google Buzz: Privacy nightmare
Google's social side hopes to catch some Buzz
