In my last post on the latest Digital Cities Challenge Academy in Brussels, I highlighted the relevance of governance structures for a digital transformation strategy, emphasising that we have in front of us a clear “digital governance divide”. There is further evidence of that divide: digital transformation strategy monitoring metrics. There are two levels of monitoring. The first is close to the technology. A city deploys a smart parking solution. That solution produces data, and that data has two main uses. The first is to enable service creation. The second is to monitor the solution in order to assess patterns and behaviour and to enable the prediction of events. If we do not monitor, we have plenty of data floating in the void and we are just wasting money! Then there is the strategic level of monitoring: creating a measurement framework that enables the monitoring of the strategy’s implementation. That is crucial for assessing directions and for creating the culture of an evidence-based policy making process. The Digital Cities Challenge proposes such a framework. It is a fantastic invitation to local authorities to think politically and operationally with data and data analysis in their hands. But the municipalities I saw in Brussels are moving at different speeds. Sometimes the paradox is that we are asking municipalities to embrace open data models when, clearly, basic indicators that could assess the digital maturity of the city either do not exist or are hidden in some very dark cave. In the latter case, the key to the cave is lost or the cave is just pitch black. And so the next course for European municipalities, particularly small and medium-sized ones, is about monitoring, monitoring, and more monitoring! Please, let’s not tell them that Barcelona, London, Amsterdam, Milan and other big cities are doing it. That will just increase their frustration and the distance between you and them. 
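The two levels of monitoring can be sketched in a few lines of code. This is a purely illustrative sketch: the bay identifiers, timestamps, and values are hypothetical, not taken from any real deployment. It only shows how the same parking data can feed both an operational view (the latest state of each bay) and a strategic indicator (an occupancy rate that a measurement framework could track).

```python
from datetime import datetime

# Hypothetical smart-parking events: (bay_id, timestamp, occupied?).
events = [
    ("bay-01", datetime(2019, 3, 4, 8, 0), True),
    ("bay-01", datetime(2019, 3, 4, 9, 30), False),
    ("bay-02", datetime(2019, 3, 4, 8, 15), True),
    ("bay-03", datetime(2019, 3, 4, 8, 20), False),
]

def latest_state(events):
    """Operational monitoring: the most recent state of each bay."""
    state = {}
    for bay, ts, occupied in sorted(events, key=lambda e: e[1]):
        state[bay] = occupied  # later events overwrite earlier ones
    return state

def occupancy_rate(events):
    """Strategic indicator: share of bays currently occupied."""
    state = latest_state(events)
    return sum(state.values()) / len(state)

print(occupancy_rate(events))  # 1 of 3 bays occupied
```

The same raw events serve both levels: the service (is this bay free right now?) and the strategy (is parking pressure rising over time?). Without at least this much, the data really does float in the void.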
And, the European Single Digital Space will become increasingly fragmented and full of pitch black caves!
The Vision and Ambition Academy of the EU-funded Digital Cities Challenge project saw the gathering of almost 40 municipalities from all over Europe, with different geographic characteristics, socio-economic features, and different levels of adoption of digital technologies in their business communities, among their citizens, and in their administrative practices. There were many interesting themes, but there was one overarching concept of great interest to many of those municipalities: the governance of digital transformation policies. That governance represents the necessary backbone of any digital transformation strategy, the structure that enables digital transformation. The cities gathered at the event showed different levels of maturity in relation to governance. Unfortunately, the gap among them is quite wide, ranging from cities at a very early stage of understanding the need for a digital transformation governance structure to cities with a formal structure and a digital paradigm flowing through the various departments of the municipality. There were cities with a formal and open structure, open towards the other stakeholders of the city, inclusive in the decision-making process, with a clear enabling strategy in place in terms of objectives, steps, and the resources needed to achieve those objectives. Among those “mature” cities, different models of governance were proposed, and there were examples that really impressed the participants of the event. However, the “digital transformation governance divide” between cities was evident and something to worry about. The European single digital space can only truly be single if that divide is bridged. Municipalities unaware of their digital transformation mission weaken that space and the opportunities of the citizens and organizations living in it. 
Other forms of digital divide are strongly affected by the “digital transformation governance divide.” But developing digital transformation governance is not an easy task. It requires a “digital framework” and a “digital frame of mind”. Building those requires a plan and the patience, perseverance, and political will to pursue that plan. The “mature” cities can help mentor the others through a specific “support programme”. That programme should come with a set of guidelines and best practices on designing, building, and sustaining a digital transformation governance structure. The Digital Cities Challenge project could design that programme by creating a working group on the topic, leveraging the various cities and experts that are part of the Digital Cities Challenge community.
The digital transformation of cities and communities can be successful only if the process is open to the contributions of citizens and organizations. That can be enabled through open models of innovation and engagement. Those are largely based on open data models, but open and inclusive coding is also becoming important for openness and inclusivity. The Foundation For Public Code, which grew out of Amsterdam’s notable open smart city experience, has been established to nourish and diffuse public code practices and experiences. We discussed public code and the Foundation with one of its founders, Boris van Hoytema.
Boris van Hoytema: The Foundation For Public Code is meant to provide support and advocate for those that develop Public Code, software and policy for the future that is built to be inclusive, usable, adaptive, open and sustainable.
The advent of computers in government leads to the digitisation of policy. Where code for cities used to be written down and executed by humans, we’re seeing that code is executed by machines more and more. In a world where more and more of the burden to solve the big issues is moving to cities, this automation means that public institutions are able to provide significantly better services to their citizens and solve a lot of the tougher issues, like climate change response and privacy in the digital age, whilst also introducing a new set of challenges.
We need to build our digital governments with the same loving care as we now apply to policy making, without losing the agility necessary for digital adaptation. Developing for businesses is fundamentally different from developing for public institutions. By sharing the components we use to build our solutions, we can develop and fix issues more quickly whilst also driving down public costs.
Saverio Romeo: Which objectives does the Foundation have?
Boris van Hoytema: The mission of the Foundation For Public Code is to create a viable future for cities and civic operating systems that are highly participatory and drive societal engagement. A public digital infrastructure that is inclusive, usable, adaptive, open and sustainable.
First of all, we are currently working to make policy and technology people aware of the concept and importance of Public Code. Our aim with the Foundation For Public Code is to provide an ‘ecosystem level’ partner for the development of Public Code codebases, codebases that are both policy and source code. Right now we see this as being a third-party ‘codebase stewardship’ partner that can provide a place for (co)development whilst continuously enforcing the standards necessary for the development of Public Code and helping every new piece of code contribute to an international ecosystem. We see this a bit like the Linux Foundation, OW2 or Apache, but for Public Code codebases. In this, we’re trying to offer a service that will make any development team better while also helping them make more replicable code.
Saverio Romeo: What is the project Smart Cities? Public Code! about?
Boris van Hoytema: We recognise that cities are going through the transformation that comes with automation. The ‘Smart Cities? Public Code!’ project is meant to build the discourse around Public Code as part of the Smart City conversation. It aims to connect the notion of ‘code is code’ to this transformation, find out what technological and institutional change is necessary and create practical tools for those in the thick of this transformation to communicate and develop.
Furthermore, we’re hoping to connect to a network of partners we can work with in order to usher in a new era for inclusive and effective governments. If you want to connect or contribute, you can find out more about the project at smartcities.publiccode.net
Saverio Romeo: How can organisations (and which types) be part of the Foundation For Public Code?
Boris van Hoytema: First of all, we invite everyone that has an opinion or mission to partake in the formulation of and discussions around what the Foundation For Public Code should be, we’re treating it as an Open Source project, and thus we have a CONTRIBUTING page that sketches out what you could do. This is still new for us so it is all a bit scary, but we would love all kinds of contributions.
Next to that is solving our challenges. The main challenge we face now with Codebase Stewardship is providing stability. Ideally, codebases that are in the care of the Foundation For Public Code are guaranteed to have some level of stewardship for an extended period of time. This needs to be paid for. We’re currently trying to figure out how to make that work and are looking for partners in this process.
I would ask organisations that want to support the Foundation For Public Code to partake in the formational discussions and in solving our challenges, perhaps even by allocating, on their budgets, some time for people who can contribute.
Next to that, we’re looking at building a ‘membership’-like structure; however, we’re still in the process of defining what that might mean and what impact different structures could have on the long-term sustainability of our projects and the public interest.
The exploration of a digital transformation strategy for L’Aquila continues with the support of the EU Digital Cities Challenge initiative. Tomorrow, there will be a seminar to discuss the ambition a city like L’Aquila can have when looking at digital technologies as an enabler of innovation and economic growth. It is a very difficult question to answer for a small-sized city located an hour and a half by car from a large city like Rome. That difficulty also depends on a cultural factor. The concepts of innovation and periphery are not necessarily linked to each other. Innovation is seen as a dynamic, hectic, disruptive, and creative concept. The periphery is seen as a static concept, an enabler of the status quo. Despite endless debates on the decentralisation of the innovation process, innovation strongly remains a matter for large cities and great urban areas. They are the engine of innovation and economic growth. The periphery is where that engine comes to rest during the weekend! There are exceptions, but, generally, this difference is part of our view of innovation, cities, and periphery. That view is so strong that when debating where the economic future of a city like L’Aquila stands, the response immediately goes to tourism. Tourism 4.0 for the smart city to rest! But there are a few warriors out there not giving up on this binary vision. L’Aquila is trying to see more in the use of digital technologies. Tourism is important, but there could be more. The question is not so much about what that more is, but how to enable a cultural shift in the city and design, manage, and monitor a digital transformation strategy that can have a wider impact in terms of innovation and entrepreneurship. Tomorrow, we will discuss this. We will discuss how to make innovation happen while the smart city comes to ski on the slopes of Gran Sasso!
There is something intriguing in the idea of the convergence of three big terms: the Internet of Things, blockchain, and AI. Lawrence Lundy from Outlier Ventures illustrated the concept briefly during a panel at the last Blockchain Expo in London. Xsure.io, a start-up aiming to bring blockchain-based innovation to the insurance industry, offered that view as the basis of their approach. See the picture below from Xsure.io. And there is a lot of literature looking at the intersection of the three groups of technologies. The overall idea is to see the Internet of Things as the data creation layer, the blockchain as the data exchange and communication layer, and advanced analytics (AI above all) as the data utilisation layer. This approach will lead us to applications that are data-rich, secure, anonymous, and autonomous. Certainly, not all IoT applications require such a combination; not all of them require blockchain or autonomous behaviour, but the potential is tremendous. There are also a number of serious challenges to face. My research is looking in this direction, exploring use cases in order to understand the feasibility of blockchain-IoT-AI based projects, their operational issues, the challenges to face, and the benefits. If you are involved in such a project, I would like to talk with you. Therefore, please contact me.
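The three-layer idea can be made concrete with a toy sketch. Everything here is illustrative: the sensor readings are made up, the hash chain is a stand-in for a real blockchain, and a median-distance check stands in for AI-grade analytics. The point is only to show data flowing from a creation layer, through an integrity-preserving exchange layer, to a utilisation layer.

```python
import hashlib
import json
import statistics

# Layer 1 - data creation: hypothetical IoT sensor readings.
readings = [21.0, 21.3, 20.8, 21.1, 35.0]  # the last value is anomalous

# Layer 2 - data exchange: a toy hash chain standing in for a blockchain.
def append_block(chain, payload):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"prev": prev_hash, "payload": payload}, sort_keys=True)
    chain.append({"prev": prev_hash, "payload": payload,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})
    return chain

chain = []
for r in readings:
    append_block(chain, r)

# Layer 3 - data utilisation: flag readings far from the median.
def anomalies(chain, threshold=5.0):
    values = [b["payload"] for b in chain]
    med = statistics.median(values)
    return [v for v in values if abs(v - med) > threshold]

print(anomalies(chain))  # [35.0]
```

Each block links to the previous one through its hash, so tampering with an earlier reading would break every subsequent link; that is the property the exchange layer contributes before the analytics layer consumes the data.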
My experience at Beecham Research is coming to an end. The decision stems from a desire for a moment of reflection and for exploring other angles of the IoT world. I have been an analyst for 11 years. Sometimes I have thought that my work was too theoretical. Other people at Beecham used to take that theory and make it applicable to business problems. In those moments, I felt the need for direct contact with businesses, seeing with my own eyes what that theory was used for. Therefore, my next step will primarily revolve around experiencing that. But the decision to leave is not dictated only by that. As I said, it was a good time for reflections: reflections on the IoT space (once called M2M), which I have covered for some time now, and on the role of analysts.
On the IoT side, the reflection lies in the opinion that the IoT vision can have a true impact on the problems we face as a human race. But unless a problem has a revenue-generation angle, the industry has struggled to give it the right attention. The justification revolves around satisfying stakeholders, which holds, but, recently, not so much with me. I see the value of technological creativity when it leads to solving problems for us all. During these years, I have seen moves aiming at revenue generation only, forcing products and services that were not really demanded or needed. I am aware that this is a very personal sentiment. But that sentiment leads me to look at areas of research that explore the role of the IoT vision in facing human challenges. The UN Sustainable Development Goals are an interesting way of seeing that, and I am glad that organisations like the GSMA and events like the IoT Week have embraced them strongly. But the whole topic needs more exploration. The various approaches aiming at “democratising everything” sometimes appear to be just marketing hype. Making the IoT a vision for all is far away from us. The directions are dictated by where the flow of money comes from and goes to. Diverting those directions, or even taking some parallel ones, seems to be difficult. Given that, what is the role of the analyst? If the analyst is an influencer, how can he or she influence the directions of development of the IoT?
That million-dollar question leads me to a reflection on the role of the analyst in the IoT world and, more generally, in technology markets. Who is an analyst? A good portion of the industry views the analyst as a sort of magician who pulls numbers out of a hat and puts them into a PowerPoint presentation or an Excel file. That magician is also a good actor, because he or she always has a good way of justifying those numbers. That is a view that has irritated me a lot during these years. Probably, analysts have fed this view a bit with their eternally growing forecasts, but the irritation has not disappeared. Behind forecasting activities, there is always a forecasting methodology and a research methodology. It is against the analyst’s own interest to behave like a magician! I doubt there are analysts who think they can get away with magic numbers. There are weak forecasting approaches and strong ones, for sure, but there is always reasoning behind forecasts. The worst thing, though, is believing that the analyst is a simple producer of forecasts. Forecasting is only one part of the exercise. The analyst is someone who explores knowledge, makes sense of it, and creates a view on the matter under analysis. Industry players need that view for a variety of reasons: for example, because they do not have the time to create an informed view on which to design their strategy, or because they need other views in order to validate their own and design their strategy accordingly. And industry players need views from different analysts, because the knowledge creation and analysis exercise is affected by each analyst’s background and expertise. Each analyst produces a valid view. There is no best view, just as there is no best analyst. Claiming “to be the best analyst” is simply arrogant, because knowledge is vast beyond human capacity. 
Perhaps the AI-based robot analyst will do better than the human analyst, but let’s see about that once the AI hype has settled into real considerations.
But, despite my thinking on the nature of analysts, I still do not have an answer on how the analyst, seen as an influencer, can drive directions of development. I guess that is difficult because the overall output of the analyst’s work is providing revenue-generating directions. The challenge, then, is making the IoT community more aware of its capability to tackle human problems while pursuing its stakeholders’ objectives. Easy to say, very hard to do. But I think this blog will try to explore that direction. I know it will be an Odyssey and I will never arrive at Ithaca! But the Greek poet Cavafy comforts me.
“Keep Ithaca always in your mind.
Arriving there is what you are destined for.
But do not hurry the journey at all.
Better if it lasts for years,
so you are old by the time you reach the island,
wealthy with all you have gained on the way,
not expecting Ithaca to make you rich.”
Developing an IoT solution is a journey whose destination is often not the final one. The destination becomes a moment of appreciation of the achievements and the instigation for a new journey. The destination is the output of the IoT solution. The appreciation of that output is the result of what the data gathered has revealed about the objectives of the organisation. And that data is also the instigation to explore something better or even something completely new. At the end of the journey, the traveller has changed. He or she appreciates that change enormously and feels that more can be achieved. But this concatenation of journeys is not a solitary experience. It requires several tools, and those tools are provided by different travelling companions. Those tools are all unique and necessary, but there is one that enables travellers to move from one journey to the next. That tool is the IoT platform and its various services. Due to the various services needed, there may be not just one IoT platform companion but several of them, specialised in different services. Probably, a companion able to integrate the different contributions is also necessary. You are not travelling alone! But how do we select our fellow travellers? There are many ready for the quest: some offering sophisticated specialisation, others offering solutions able to sustain the entire expedition and its needs. Probably the latter will bring friends with them, simply because a solitary journey is difficult for them too. All those promising fellow travellers will interest you because they all present various degrees of innovation and valuable capabilities. But all of them have elements or weaknesses that can be unsuitable during the journey or make the journey a bit more difficult. You cannot make the optimal choice, but you can certainly make a sub-optimal one that will guide you along the journey. 
But even the sub-optimal choice requires knowledge of the potential partners and what they offer. The creation of that knowledge is a step-by-step process. Your IoT solution journey requires exploring the landscape of IoT platform partners to see who will help you reach the destination. The first step of this investigation is reducing a landscape of more than 450 potential partners to a short list of candidates. The short list is made of organisations that can support you because they offer the technological tools you are looking for and have experience in your application domain and sector. A deep technological assessment and business evaluation is then necessary to identify the final candidate or candidates. In fact, it is important to bear in mind that, even at the end of the journey, you could still have several companions. That is perfectly fine. That is called an ecosystem, and without it, developing an IoT solution could become an endless Odyssey. Despite embodying human curiosity, and travelling as a way of satisfying that curiosity, even Odysseus at some point wanted to reach his destination. Certainly, adopters cannot afford years of travelling just to satisfy curiosity, but selecting the IoT platform is an important journey to take. It should not be a plug-and-play exercise. It should require time and thinking, because the IoT platform will become the compass of your future IoT journeys.
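The shortlisting step can be pictured as a simple filter over the landscape. The sketch below is illustrative only: the platform names, capability labels, and sector tags are invented, and a real evaluation would weigh far more criteria. It just shows the logic of keeping only the partners that offer the tools you need and have experience in your domain.

```python
# Hypothetical platform landscape; names and attributes are illustrative only.
platforms = [
    {"name": "PlatformA", "capabilities": {"device-mgmt", "analytics"},
     "sectors": {"smart-city", "utilities"}},
    {"name": "PlatformB", "capabilities": {"device-mgmt"},
     "sectors": {"automotive"}},
    {"name": "PlatformC", "capabilities": {"device-mgmt", "analytics", "edge"},
     "sectors": {"smart-city"}},
]

def shortlist(platforms, needed_capabilities, sector):
    """Keep only platforms offering every required tool and the right domain experience."""
    return [p["name"] for p in platforms
            if needed_capabilities <= p["capabilities"] and sector in p["sectors"]]

print(shortlist(platforms, {"device-mgmt", "analytics"}, "smart-city"))
# ['PlatformA', 'PlatformC']
```

The filter is only the first step; the deep technological assessment and business evaluation of the survivors is where the real work of choosing your travelling companions begins.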