This has been done in the macroeconomics literature to model bank interconnectedness and financial crises. That literature starts around 2008 or so. I can't give you references since it's not my subfield, but it is done. (Daron Acemoglu is one name, but he works on many topics. Maybe Douglas Gale has also worked on this.) Models of contagion in bank networks are an example.
There is also literature in international trade on trade networks (sourcing components for a final product, for example). Here I don't have names for you.
There is also Matt Jackson at Stanford who has worked on many, many topics in networks. On the empirics (which are very challenging) you may want to look up Bryan Graham and coauthors.
From my limited exposure to the work on financial crises and trade, it doesn't seem that interesting, but it does exist. The empirical work on networks exposes a lot of challenges. Graham (IIRC) has a recent survey in the Handbook of Econometrics if you'd like to learn more.
Finding the ideal meta-structure is not the core problem. It's getting every company on earth to report the data even remotely accurately and in a timely way, then disseminating and analyzing it.
I actually think we need government to enforce this data collection, and it needs to take advantage of some decentralized systems to be workable. Primarily because we need "hard" (usable) data about resources, wealth (inequality), crops, etc. in order to have a realistic (and indisputable) view of what's happening. Combining that type of decentralized megastream with advanced cryptocurrency smart contracts could change economics from a cult into a useful science.
Sounds like you want to build a fully-working and -encapsulated simulation of the universe ;)
Even modeling a single, non-trivial business would probably be exceptionally difficult.
I mapped my country's businesses and their owners' relationships. Some very interesting things popped up: a private yellow/fake-news company with THOUSANDS of owners/shareholders, lots of people with a few hundred companies each, company "rings", etc. Pretty fun stuff :)
I did a smaller subset of what you mentioned as part of my first attempt at a PhD (since abandoned) around 15 years ago. What I did was model relationships between banks, with edges denoting interbank loans. Then I computed the eigenvector centrality (think PageRank) to characterize the "risk" taken on by a bank (a sketch of the idea is below). This was later expanded to include things like mortgages and prime and subprime loans.
The idea was that you could later run simulations and what-if scenarios. Lots of agent-based modeling was happening too once we had the structure up and running.
(This was done around the Lehman Brothers period, and there was a lot of interest in this kind of work in complexity science.)
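In case it's useful, here is a minimal sketch of that centrality computation, done with plain power iteration on a made-up exposure matrix. The numbers, and the choice of direction for "risk", are illustrative assumptions, not real data or the exact method used:

```python
import numpy as np

# Hypothetical interbank network: entry (i, j) is the size of the loan
# from bank i to bank j. All numbers are made up for illustration.
exposures = np.array([
    [0.0, 5.0, 1.0, 0.0],
    [2.0, 0.0, 4.0, 1.0],
    [0.0, 3.0, 0.0, 2.0],
    [1.0, 0.0, 6.0, 0.0],
])

def eigenvector_centrality(A, iters=1000, tol=1e-12):
    """Dominant eigenvector of A by power iteration, normalized to sum to 1."""
    v = np.ones(A.shape[0]) / A.shape[0]
    for _ in range(iters):
        v_next = A @ v
        v_next /= v_next.sum()
        if np.abs(v_next - v).max() < tol:
            break
        v = v_next
    return v

# Use `exposures` or `exposures.T` depending on whether "risk" should flow
# along loans made or loans received.
for bank, score in enumerate(eigenvector_centrality(exposures)):
    print(f"bank {bank}: centrality {score:.3f}")
```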
I did think up some ideas one evening long ago and wrote down a rough note, but I don't think it would be of much use for serious predictions (except as a fun toy to play around with to generate ideas).
A mostly un-edited, totally unpolished, and probably erroneous - don't judge too hard :) - version here:
- All goals are to some extent intermediate - a means to achieving a further goal - so the graph is directed but not acyclic
- Some goals are largely measured by how useful they are in achieving others, exemplified by stocks and tokens
- Node values are the measurement of the goal (in what units?) and the edge values are the percentage split (like a Sankey diagram), but inclusive of factors less than 0 or greater than 1 (i.e. any value) to be added to the value in the node, like y = y + x*f
- (The x-f relationship could also be exponential - y = y + x*f^n may be a better equation)
- Maybe, by measuring the values of x and y empirically over time, we can try to calculate f (see the sketch after this list). f has units that balance out the units of x and y, so there is no problem with incompatible units
- What are the nodes? Every damn thing that can be measured: prices of everything sold on the market, population statistics like literacy rates, time spent on Khan Academy - anything that can be quantitatively measured (the quality of the measurement doesn't matter, as each f value is completely independent of the other f values)
- And we have a tech tree! You can choose the measurements you want to optimize for and use the graph to prioritize your resources towards progress. It can also be used to intelligently guess at the inputs and outputs of progress on a specific goal
- Better for quantifying the current economy and scientifically deploying investment in the near future. The long term is obviously unpredictable (think https://twitter.com/robert_zubrin/status/1278681124944793611), but it can be used to analyze changes during previous paradigm shifts with historical data
- This is 99% dependent upon price signals (which I believe will be almost all of the useful data)
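Under the note's linear rule y = y + x*f, the increments of y are dy = f*x, so a least-squares fit recovers f. A minimal sketch of that estimation, with a made-up series for a single hypothetical edge:

```python
import numpy as np

# Hypothetical time series for one edge x -> y (all numbers made up).
# The update rule y_{t+1} = y_t + f * x_t implies dy_t = f * x_t, so
# least squares gives f = sum(x * dy) / sum(x^2).
x = np.array([1.0, 2.0, 1.5, 3.0, 2.5])
y = np.array([10.0, 10.4, 11.2, 11.8, 13.0, 14.0])

dy = np.diff(y)                       # observed increments of the target node
f_hat = (x * dy).sum() / (x * x).sum()
print(f"estimated edge factor f = {f_hat:.3f}")
# f carries units of (units of y) / (units of x), as the note says,
# so incompatible units are absorbed into f itself.
```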
Most of the "knowledge graph" world takes an "outside of time" viewpoint. When you get into more dynamic situations like the ones you are talking about (running simulations), you are getting beyond the state of the art.
This idea was also documented by Mindey on Infinity Family. It was called a "network of functions."
We at mappes.io are doing something similar in the industrial domain. Our knowledge graph has a few layers (rough sketch below):
- Products (raw material <> application use)
- Products to companies (supplier <> buyer)
- People, connected via companies and products
We are already seeing the benefits of this in being able to easily discover new connections across products and companies. Our focus right now is on a few verticals in the manufacturing sector, and we hope to expand to the wider manufacturing space at some point.
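Purely as illustration - a hypothetical, heavily simplified version of such a layered graph, not mappes.io's actual schema - the structure might look something like:

```python
from dataclasses import dataclass

# Hypothetical layered graph: products link to products, companies link to
# products as suppliers or buyers, and people link to companies.

@dataclass(frozen=True)
class Edge:
    src: str
    dst: str
    kind: str   # "raw_material_for", "supplies", "buys", "works_at"

edges = [
    Edge("polysilicon", "solar_cell", "raw_material_for"),   # product layer
    Edge("AcmeMaterials", "polysilicon", "supplies"),        # company layer
    Edge("SunCorp", "polysilicon", "buys"),
    Edge("alice", "AcmeMaterials", "works_at"),              # people layer
]

def reachable_products(person: str, edges: list[Edge]) -> set[str]:
    """Discover connections: products reachable through a person's companies."""
    companies = {e.dst for e in edges if e.src == person and e.kind == "works_at"}
    return {e.dst for e in edges if e.src in companies and e.kind == "supplies"}

print(reachable_products("alice", edges))   # {'polysilicon'}
```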
Try the work of Wynne Godley and Marc Lavoie, who use stock-flow consistent models to model the economy. See their book: Monetary Economics: An Integrated Approach to Credit, Money, Income, Production and Wealth [1]
[1] https://www.amazon.com/Monetary-Economics-Integrated-Approac...
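For a concrete taste of the stock-flow consistent approach, here is a toy sketch of the simplest model in that book (the "SIM" model of Chapter 3), using the book's illustrative parameter values. Treat it as a reading companion, not a faithful reproduction:

```python
# Toy "SIM" model: a pure-money economy with government spending G,
# a tax rate theta, and household consumption out of disposable income
# and accumulated money balances H.

G, theta = 20.0, 0.2          # government spending, tax rate
alpha1, alpha2 = 0.6, 0.4     # propensities to consume out of income / wealth

H = 0.0                       # household money stock (= government debt)
for t in range(1, 61):
    # Within a period, Y = C + G and C = a1*(1-theta)*Y + a2*H solve to:
    Y = (G + alpha2 * H) / (1 - alpha1 * (1 - theta))
    T = theta * Y             # taxes
    YD = Y - T                # disposable income
    C = alpha1 * YD + alpha2 * H
    H += YD - C               # stock-flow consistency: savings add to money
    if t in (1, 5, 60):
        print(f"t={t:2d}  Y={Y:6.2f}  C={C:6.2f}  H={H:6.2f}")

# In the long run Y converges to G/theta (here 100): the stationary state.
```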
When politicians are setting national budgets – arguing for a billion here, a billion there – I always wonder what model they use. Is it a spreadsheet? What are they using?
Take a look at stock-flow consistent models, which incorporate some of what you are looking for.
Agent-based economic models were a really hot idea 15-20 years ago. They had interesting properties, but to my knowledge nobody could ever calibrate them to generate real-world testable predictions.
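As a toy illustration of the genre (and of why calibration is hard) - not any specific published model - here is a minimal exchange model in which a perfectly symmetric random rule still produces pronounced inequality:

```python
import random

# Toy agent-based model: agents start with equal wealth, meet at random,
# and one hands a unit to the other. Despite the symmetric rule, an
# unequal, exponential-looking wealth distribution emerges.

random.seed(0)
N, STEPS = 500, 200_000
wealth = [10] * N

for _ in range(STEPS):
    a, b = random.randrange(N), random.randrange(N)
    if wealth[a] > 0:          # no agent can go below zero
        wealth[a] -= 1
        wealth[b] += 1

wealth.sort()
top_decile = sum(wealth[-N // 10:]) / sum(wealth)
print(f"share of wealth held by the top 10%: {top_decile:.1%}")
```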
If you really want to get exotic, peek at what the Cybersyn people were trying to achieve.
This thread may be of interest as somewhat related, with modeling of the energy flow at global scale. https://news.ycombinator.com/item?id=31966435
Basically, a digital twin of the world economy. There are conspiracy theorists who think this already exists: it captures all of the major firms and simulates stand-ins for the small businesses.
Hacker News needs more diversity :p This problem is studied in the industrial ecology literature and in complex-networks research.
I'm working on this. DM me if you're interested.
I saw an article that mapped hundreds and hundreds of thousands of businesses to other companies to see which companies actually ruled the world. There were something like 6 or 8 companies at the center of all the others.
I forget the exact details, but it was a cool article.
This could be done. What would be the source of data? Would it be updated in real-time?
LinkedIn?
Improbable might be working on this.
Welcome to complexity economics.
Sylvie and Bruno Concluded, Lewis Carroll, 1893.
"That's another thing we've learned from your Nation," said Mein Herr, "map-making. But we've carried it much further than you. What do you consider the largest map that would be really useful?"
"About six inches to the mile."
""Only six inches!"exclaimed Mein Herr. "We very soon got to six yards to the mile. Then we tried a hundred yards to the mile. And then came the grandest idea of all! We actually made a map of the country, on the scale of a mile to the mile!"
"Have you used it much?" I enquired.
"It has never been spread out, yet," said Mein Herr: "the farmers objected: they said it would cover the whole country, and shut out the sunlight! So we now use the country itself, as its own map, and I assure you it does nearly as well.