Here are my 2 cents.
I have paid my dues writing CRUD apps for business. They all want the same thing: something that keeps track of entities and controls how the organization interacts with those entities.
In one year, for instance, I worked on systems for an academic department and a logistics company. The academic department needed a custom CRM to handle students through the lifecycle of prospect to applicant to student to alumni. The logistics company had assets all over the place and contracted with vendors to move those assets around and do various things with them. It sent out invoices to customers and payments to vendors and all that.
I could think up several more examples, but if we looked at a number of business systems we would see so many common elements that it seems you ought to be able to write the schema, add a few business rules, and there is your application. Of course, vendors have been promising us "4th Generation Languages" since before the AI Winter, and Ruby on Rails and its descendants have realized many of those claims. Still, building and maintaining these systems means so much messing with details that there has got to be a better way.
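To make the "write the schema, add a few business rules, and there is your application" idea concrete, here is a minimal sketch. Everything in it (the `Table` class, the in-memory store, the lifecycle rule) is a hypothetical illustration, not any particular vendor's framework:

```python
# Hypothetical sketch: generate CRUD operations from a declarative schema
# plus a few business rules. In-memory only; a real system would back this
# with a database.

class Table:
    """One entity type: a field list, optional rules, and CRUD methods."""

    def __init__(self, name, fields, rules=None):
        self.name = name
        self.fields = set(fields)
        self.rules = rules or []   # callables: record -> error message or None
        self.rows = {}
        self.next_id = 1

    def _check(self, record):
        unknown = set(record) - self.fields
        if unknown:
            raise ValueError(f"unknown fields: {unknown}")
        for rule in self.rules:
            error = rule(record)
            if error:
                raise ValueError(error)

    def create(self, **record):
        self._check(record)
        rid, self.next_id = self.next_id, self.next_id + 1
        self.rows[rid] = record
        return rid

    def read(self, rid):
        return self.rows[rid]

    def update(self, rid, **changes):
        merged = {**self.rows[rid], **changes}
        self._check(merged)
        self.rows[rid] = merged

    def delete(self, rid):
        del self.rows[rid]


# The academic-CRM example from above: a schema and one lifecycle rule.
STAGES = {"prospect", "applicant", "student", "alumni"}
students = Table(
    "students",
    fields=["name", "stage"],
    rules=[lambda r: None if r.get("stage") in STAGES
           else "invalid lifecycle stage"],
)

rid = students.create(name="Ada", stage="prospect")
students.update(rid, stage="applicant")
```

The point of the sketch is that the application-specific part shrinks to the schema and the rule list; the CRUD machinery is generic.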
This shared system would probably be an "upper" ontology, because it comes down to a 4D model of people and assets (you bet you need to know when asset 774Q8 was last in Milwaukee), business transactions, and that sort of thing.
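The 4D point can be sketched in a few lines: if facts about assets carry a time dimension, "when was asset 774Q8 last in Milwaukee?" becomes a simple query. The asset IDs, locations, and dates below are made up for illustration:

```python
# Hedged sketch of a 4D fact store: each observation is
# (asset_id, location, date), so temporal questions are just queries.
from datetime import date

observations = [
    ("774Q8", "Milwaukee", date(2013, 1, 4)),
    ("774Q8", "Chicago",   date(2013, 2, 11)),
    ("774Q8", "Milwaukee", date(2013, 5, 30)),
    ("9913A", "Milwaukee", date(2013, 6, 2)),
]

def last_seen(asset_id, location):
    """Most recent date the asset was observed at the location, or None."""
    dates = [ts for aid, loc, ts in observations
             if aid == asset_id and loc == location]
    return max(dates) if dates else None

print(last_seen("774Q8", "Milwaukee"))  # the most recent Milwaukee sighting
```

A 3D model would only store where the asset is now; keeping the whole history is what makes the invoicing-and-auditing questions answerable later.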
I'd like to see 1000 flowers bloom around this schema, but for this to succeed there really has to be one vendor who builds a system from soup to nuts that really "rocks" people.
There's an overlap between the world of Wikidata and the "business app" domain described above: maybe some of your customers have Wikidata IDs, or your locations correspond to items in Wikidata, and there are similarities in schema as well.
-----Original Message-----
From: Jane Darnell
Sent: Monday, July 8, 2013 1:13 PM
To: Discussion list for the Wikidata project.
Subject: Re: [Wikidata-l] Accelerating software innovation with Wikidata and improved Wikicode
I am all for a "dictionary of code snippets", but as with all dictionaries, you need a way to group them, whether alphabetically or by "birth date". It sounds like you have an idea of how to group those code samples, so why don't you share it? I would love to build my own "pipeline" from a series of algorithms that someone else published for me to reuse. I am also for more sharing of data-centric programs, but where would the data be stored? Wikidata is for data that can be used by Wikipedia, not by other projects, though maybe someday we will find the need to put actual weather measurements in Wikidata for some oddball Wikisource project to do with the history of global warming or something like that.
I just don't quite see how your idea would translate into a project in the Wiki(p/m)edia world that could be indexed.
But then I never felt the need for "high-fidelity simulations of virtual worlds" either.
2013/7/6, Michael Hale hale.michael.jr@live.com:
I have been pondering this for some time, and I would like some feedback. I figure there are many programmers on this list, but I think others might find it interesting as well.

Are you satisfied with our progress in increasing software sophistication as compared to, say, increasing the size of datacenters? Personally, I think there is still too much "reinventing the wheel" going on, and the best way to get to software that is complex enough to do things like high-fidelity simulations of virtual worlds is to essentially crowd-source the translation of Wikipedia into code.

The existing structure of the Wikipedia articles would serve as a scaffold for a large, consistently designed, open-source software library. Then, whether I was making software for weather prediction and needed code to slowly simulate physically accurate clouds, or making a game and needed code to quickly draw stylized clouds, I could just go to the article for clouds, click on C++ (or whatever programming language is appropriate), and find some useful chunks of code. Every article could link to useful algorithms, data structures, and interface designs that are relevant to the subject of the article. You could find data-centric programs too — say, a JavaScript weather statistics browser and visualizer that accesses Wikidata.

The big advantage would be that constraining the design of the library to the structure of Wikipedia would handle the encapsulation and modularity aspects of the software engineering, so that the components could improve independently. Creating a simulation or visualization where you zoom in from a whole cloud to see its constituent microscopic particles is certainly doable right now, but it would be a lot easier with a function library like this.

If you look at the existing Wikicode and Rosetta Code, the code samples are small and isolated. They will show, for example, how to open a file in 10 different languages.
The search engines already do a great job of helping us find those types of code samples across the blog posts of people who have had to do that specific task before. A problem I run into frequently that they don't help me solve is this: if I read a nanoelectronics paper and want to simulate the physical system it describes, I often have to go to the websites of several different professors and do a fair bit of manual work to assemble their different programs into a pipeline, and the result of my hacking is not easy to extend to new scenarios. We've made enough progress on Wikipedia that I can often just click on a couple of articles to get an understanding of the paper, but if I want to experiment with the ideas in a software context I have to do a lot of scavenging and gluing.

I'm not yet convinced that this could work. Maybe Wikipedia works so well because the internet reached a point where there was so much redundant knowledge listed in so many places that there was immense social and economic pressure to have knowledgeable people summarize it in a free encyclopedia. Maybe the total amount of software that has been written is still too small, there are still too few programmers, and writing code is still too difficult compared to writing natural language for the crowdsourcing dynamics to work. There have been a lot of successful open-source software projects, of course, but most of them are focused on creating software for a specific task rather than library components that cover all of the knowledge in the encyclopedia.
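The "assemble different programs into a pipeline" complaint above can be sketched as follows. If each scavenged algorithm is wrapped behind a common data-in, data-out interface, the glue reduces to simple composition; the stage functions here are stand-ins, not anyone's published code:

```python
# Hypothetical sketch: compose independently written stages into one
# reusable pipeline instead of a one-off glue script.
from functools import reduce

def pipeline(*stages):
    """Compose stages left-to-right into a single callable."""
    return lambda data: reduce(lambda acc, stage: stage(acc), stages, data)

# Stand-ins for programs scavenged from different authors:
def parse(raw):
    """Turn a comma-separated string into floats."""
    return [float(x) for x in raw.split(",")]

def normalize(values):
    """Scale values so the maximum is 1.0."""
    top = max(values)
    return [v / top for v in values]

def summarize(values):
    """Reduce to a few summary statistics."""
    return {"n": len(values), "mean": sum(values) / len(values)}

analyze = pipeline(parse, normalize, summarize)
result = analyze("2,4,8")
```

Swapping in a different normalizer or adding a new stage is one line, which is exactly the "easy to extend to new scenarios" property the ad-hoc glue lacks.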
_______________________________________________
Wikidata-l mailing list
Wikidata-l@lists.wikimedia.org
https://lists.wikimedia.org/mailman/listinfo/wikidata-l