This may be part of indexing, but since this process is manually managed and tied to Webmaster Tools, I think it's useful to treat it as a separate concept. Likewise, disavow data is an input that is partly controlled by the webmaster. That data has to be processed periodically and then fed back into the algorithm and/or the link graph; presumably there is a semi-automated editorial step that validates and cleans the user-submitted data. So, that gives us something like this: Of course, there are many other inputs that feed into other parts of the system. For example, sitemaps in Webmaster Tools help shape your index.
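To make that processing step concrete, here is a minimal sketch, assuming a toy link graph and made-up function and field names (this is purely illustrative, not how Google actually stores or applies disavow data): user-submitted lines are validated and cleaned, and on the next processing run the flagged links simply stop counting toward the target site.

```python
# Hypothetical sketch: validate webmaster-submitted disavow data, then apply it
# to a toy link graph on a periodic processing run. All names are illustrative.

from dataclasses import dataclass


@dataclass
class Link:
    source_domain: str
    target_domain: str
    counts_for_ranking: bool = True


def clean_disavow_lines(raw_lines):
    """Keep only well-formed 'domain:example.com' entries; drop comments and noise."""
    domains = set()
    for line in raw_lines:
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # blank lines and comments are ignored
        if line.startswith("domain:"):
            domains.add(line[len("domain:"):])
    return domains


def apply_disavows(links, target_domain, disavowed_domains):
    """The periodic pass: links from disavowed domains no longer pass value."""
    for link in links:
        if link.target_domain == target_domain and link.source_domain in disavowed_domains:
            link.counts_for_ranking = False


# Example: one spammy link gets neutralized on the next processing run.
links = [
    Link("spam-site.example", "my-site.example"),
    Link("good-site.example", "my-site.example"),
]
disavowed = clean_disavow_lines(["# links I never built", "domain:spam-site.example"])
apply_disavows(links, "my-site.example", disavowed)
```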
My goal is to give you an idea of the main concepts. As you can see, it gets complicated very quickly. Updates: Panda, Penguin, and Hummingbird Finally, we have the algorithm updates we all know and love. In many cases, an update is really just a change or addition to a small portion of the code. Over the past few years, though, algorithm updates have become trickier. Let's start with Panda, which originally launched in February of 2011. The Panda update is more than just a tweak to the code; it was (and probably still is) a sub-algorithm with its own data structure, existing outside of the core algorithm (conceptually speaking).

Every month or so, the sub-algorithm is re-run, its data is updated, and that data feeds back into the core algorithm as what amounts to a new ranking factor. Penguin probably operates in a similar way, since it is also a sub-algorithm with its own separate data set. We'll put them both outside the big blue oval: I don't mean to imply that Panda and Penguin are the same; they work in very different ways. I'm simply suggesting that both updates rely on their own code and data sources, and are only fed back into the core system periodically.
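Here is a hedged sketch of that idea, not Google's actual code and with entirely made-up signal names and weights: a sub-algorithm runs as a periodic batch job over its own data set, and the score it publishes is read by the core algorithm as just one more ranking factor until the next run refreshes it.

```python
# Illustrative only: a Panda-style sub-algorithm that is re-run periodically,
# with its output consumed by the core algorithm as a cached ranking factor.


def run_quality_subalgorithm(site_signals):
    """Periodic batch job: compute a per-site quality score from its own data set."""
    scores = {}
    for site, signals in site_signals.items():
        # Hypothetical scoring: thin content and heavy ads lower the score.
        score = 1.0
        score -= 0.5 * signals.get("thin_content_ratio", 0.0)
        score -= 0.3 * signals.get("ad_density", 0.0)
        scores[site] = max(score, 0.0)
    return scores


def core_ranking_score(page, quality_scores):
    """The core algorithm combines its usual factors with the cached sub-score."""
    base = page["relevance"] * page["link_authority"]
    site_quality = quality_scores.get(page["site"], 1.0)  # stale until the next run
    return base * site_quality


# The sub-algorithm is re-run every month or so; between runs, the core
# algorithm keeps using the last published scores.
quality_scores = run_quality_subalgorithm({
    "thin-site.example": {"thin_content_ratio": 0.8, "ad_density": 0.6},
    "solid-site.example": {"thin_content_ratio": 0.1, "ad_density": 0.2},
})
page = {"site": "thin-site.example", "relevance": 0.9, "link_authority": 0.7}
print(core_ranking_score(page, quality_scores))
```

The design point the sketch tries to capture is simply the separation: the expensive scoring happens on its own schedule and data, while the core scoring function only ever reads the most recently published results.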