This is a follow-up to a previous post that addresses weaknesses with Carnap's Linguistic Frameworks approach; I wanted to suss out its last paragraphs a bit more fully.
The Myth of the Given is a famous argument made by Sellars that attacks the combination of Foundationalism and Empiricism. It was conceived during a time of major upheaval in philosophy: the decline of Logical Positivism, which had held sway over most of Western philosophy for nearly five decades. The Myth of the Given accompanied various other criticisms (for example, those raised by Quine) that challenged almost all of the assumptions, implicit or explicit, that Empiricism made or required for its justification.
The gist of the argument is that Empiricists want to ground inferential knowledge and reasoning in sense percepts ("sense data"). I read a great article on the subject that lays out the horns of the dilemma, summarizing The Myth of the Given like so:
"[T]he proponent of the given is caught in a fundamental and inescapable dilemma: if his intuitions or direct awarenesses or immediate apprehensions are construed as cognitive, at least quasi-judgmental (as seems clearly the more natural interpretation), then they will be both capable of providing justification for other cognitive states and in need of it themselves; but if they are construed as noncognitive, nonjudgmental, then while they will not themselves need justification, they will also be incapable of giving it. In either case, such states will be incapable of serving as an adequate foundation for knowledge. This, at bottom, is why empirical givenness is a myth. (BonJour, 1985, p. 69)"
A few possible lines of reply that have been considered:
I'm a big fan of the following intersection of ideas, which I believe addresses points 2 and 3 above (regarding how something can participate in conveying justification while perhaps lacking it directly itself):
Previously, I described the interaction of three ontological units or kinds as being the basis for SCO. Here, I'd like to tease out some of the differences a little more clearly:
Justification and inference-making live in 4 (theories). We reason about things through formal edifices (though the degree of formalization may vary quite a bit).
We also challenge whether given structures are the best, suitable, or even sufficient for representing phenomena. Because I maintain that theories and structures are largely independent and embedded in a "milieu" (a flexible "sea" of possible combinations, as it were, rather than a "pyramid"), which mirrors actual scientific and mathematical practice, they can be detached, and structures can be considered or evaluated from a meta level (e.g., in a metalogic or a metalanguage). That theorizing still occurs at the level of theory (although one may have jumped from one theory into another, broader, more meta theory).
As such, I maintain that phenomenal experience can lack justification (to attribute it is essentially a category error on my view) yet still be part of a justification-conveying system, cutting off the Myth of the Given as a viable critique of SCO.
I wanted to take some time to more fully flesh out my previous post since it's still a bit opaque:
So, to reprise a few other (jumbled) comments made elsewhere:
An unjustified belief nevertheless has a structure to it. And we reason about the structure of such beliefs (though the beliefs themselves lack justification).
That’s just what we do when we parse an argument and assess its validity or soundness.
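To make that concrete, here is a minimal sketch (my own illustration, not anything from the post) in Python: a tiny truth-table check that assesses an argument form's validity purely from its structure, without any reference to whether the premises themselves are justified.

```python
from itertools import product

def valid(premises, conclusion, variables):
    """True if every assignment making all premises true also makes the conclusion true."""
    for values in product([True, False], repeat=len(variables)):
        env = dict(zip(variables, values))
        if all(p(env) for p in premises) and not conclusion(env):
            return False
    return True

# Modus ponens: P, P -> Q, therefore Q.
print(valid([lambda e: e["P"], lambda e: (not e["P"]) or e["Q"]],
            lambda e: e["Q"], ["P", "Q"]))   # True: the form is valid

# Affirming the consequent: Q, P -> Q, therefore P.
print(valid([lambda e: e["Q"], lambda e: (not e["P"]) or e["Q"]],
            lambda e: e["P"], ["P", "Q"]))   # False: the form is invalid
```

Whether anyone is justified in believing P or P -> Q never enters into the check; only the structure of the argument does.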
Machine Learning demonstrates how primitive inputs (structured to lesser or greater extents) can be organized into concepts by way of an intermediate classification scheme. Non-inferential primitives are put into structures and then reasoned about further (through function fitting and other statistical associations).
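As a loose illustration (my own toy example; the data and names are made up, not drawn from any particular ML system), here is a bare-bones k-means pass in Python that organizes raw, unlabelled "primitives" into coarse "concepts" (clusters) via an intermediate classification step:

```python
import random

random.seed(0)

def kmeans(points, k, steps=20):
    """Group 2-D points into k clusters by alternating assignment and re-estimation."""
    centers = random.sample(points, k)
    for _ in range(steps):
        # assign each primitive to the nearest current "concept" (cluster center)
        clusters = [[] for _ in range(k)]
        for x, y in points:
            i = min(range(k), key=lambda c: (x - centers[c][0]) ** 2 + (y - centers[c][1]) ** 2)
            clusters[i].append((x, y))
        # revise each concept in light of the members just assigned to it
        centers = [(sum(p[0] for p in c) / len(c), sum(p[1] for p in c) / len(c)) if c else centers[j]
                   for j, c in enumerate(clusters)]
    return centers, clusters

# Two vaguely separated blobs of "sense inputs"
points = [(random.gauss(0, 0.5), random.gauss(0, 0.5)) for _ in range(50)] + \
         [(random.gauss(5, 0.5), random.gauss(5, 0.5)) for _ in range(50)]
centers, clusters = kmeans(points, k=2)
print(sorted(centers))   # roughly (0, 0) and (5, 5): two emergent "concepts"
```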
The appropriateness of the representation of a pattern is often a subject of debate. So, such patterns require justification even though the structures presented within our experiences are just that: presented. But such patterns are debated using reasoning systems (theories) - "top-down", or at least not "bottom-up" alone.
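A toy sketch of that point (again my own; the synthetic data and the held-out-error criterion are illustrative assumptions, not anything from the post): whether a linear, quadratic, or highly flexible structure best represents a pattern is adjudicated by a criterion we bring to the inputs, not by the presented inputs alone.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 60)
y = 0.5 * x ** 2 + rng.normal(0, 0.3, x.size)   # the "presented" pattern

idx = rng.permutation(x.size)
train, test = idx[:40], idx[40:]

for degree in (1, 2, 8):                        # three candidate representations
    coeffs = np.polyfit(x[train], y[train], degree)
    err = np.mean((np.polyval(coeffs, x[test]) - y[test]) ** 2)
    print(f"degree {degree}: held-out error {err:.3f}")
# Typically the quadratic structure wins; neither the too-rigid nor the
# too-flexible representation is vindicated by the data "bottom-up" alone.
```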
Again, I’m also not a Foundationalist - I think we often revise “lower level” concepts in light of “higher level” ones (Information Theory being applied to Physics, for example). I dislike the “levels” metaphor altogether. I think this squares better with modern Machine Learning techniques, since classifications and function fit are evaluated iteratively through intermediate learning algorithms.
Machine Learning shows us that concept formation involves both "top-down" and "bottom-up" notions working simultaneously. Given training patterns and a learning algorithm, a function is recursively (iteratively) fit to inputs and expected outputs. Conceptual classifications - taxonomic or category assignments - are made through statistical associations. In other words, intermediate algorithms define concepts from primitive inputs, and justification occurs here through statistical likelihood.
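Here is a minimal, hedged sketch of that picture (my own illustration; the synthetic data and parameters are assumptions): a logistic-regression-style loop that iteratively fits a function from inputs to expected labels by gradient ascent on the log-likelihood, then assigns a new input to a "concept" with a graded probability rather than an all-or-nothing verdict.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(-2, 1, (100, 2)), rng.normal(2, 1, (100, 2))])  # primitive inputs
y = np.concatenate([np.zeros(100), np.ones(100)])                         # expected concept labels

w, b = np.zeros(2), 0.0
for _ in range(500):                                   # iterative (recursive) refitting
    p = 1 / (1 + np.exp(-(X @ w + b)))                 # current likelihood of concept 1
    w += 0.5 * (X.T @ (y - p)) / len(y)                # gradient ascent on the log-likelihood
    b += 0.5 * np.mean(y - p)

new_input = np.array([1.5, 2.0])
prob = 1 / (1 + np.exp(-(new_input @ w + b)))
print(f"P(concept 1 | input) = {prob:.3f}")            # graded, likelihood-based assignment
```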