The Ideology of the Smartphone

Here is a short excerpt from the second chapter of the book I’m working on.  To set it up briefly, I’m trying to write something somewhat in the form of Cicero’s Obligations: a message to my grown children, letting them know what I’ve spent my life learning.  But unlike Cicero, I also want to suggest that they NOT live their lives as I did, that they begin from where it took me my whole life to get, and do things differently.  For this reason, the book is addressed to an imagined reader (my daughters after they graduate from college), but meant to be of use to others in a similar situation.  The “narrator,” then, is a sort of persona; although not terribly unlike me, he is meant to be a bit more level-headed and reflective than I usually am in real life.

If you happen to be reading, please do let me know your thoughts on this—it is something I am quite concerned about making clear to my own children, as well as to others.

Here is the excerpt:

I have so far emphasized how important ideology is, all the positive things it does for us.  But it is important to remember that there can be “bad” ideologies as well, and they are often difficult to become cognizant of because they seem so necessary, natural, and even harmless.  A bad ideology is one that alienates us even as it promises us enormous benefits.

So I will offer an example of an ideological practice that you engage in so often now that you probably have come to conceive of it as something completely outside of ideology, as a space in which you step outside of ideology, outside of the practices which reproduce our relations of production. The practice, that is, in which most people think they are set free from such concerns and enter into a realm of limitless information and freedom from all material constraints.  Of course, I’m talking about your smartphone.  I can see you roll your eyes.  You want to put the book down now.  You’ve heard my jeremiads about cellphones all your life.  But keep in mind that this is the kind of resistance we feel about considering anything that is really a part of our ideology—we aren’t bothered, or bored, by a critique of someone else’s ideology.  Only if the critique threatens to weaken our strongest attachments do we begin to feel that “boredom” that is the strongest psychological defense against new ideas that might force us to change.  Your phone is one of the most important ideological practices of global capitalism, and it wouldn’t work if you weren’t so powerfully attached to it.

Half a century ago, the closest thing we had serving this function was television.  From the moment it started, we had a love-hate relationship with the TV.  We complained constantly about how it was ruining the minds of the young, making us all sex-crazed and violent.  But we paid good money to have the best and newest one in our house, arranged our houses around television screens, gave up clubs like Kiwanis and social activities like bowling leagues to devote time to evening programs.  Eventually, we were even willing to pay sums as much as half our monthly rents to get cable, so we could have more channels, more “choices,” and fewer interruptions of our viewing.  Television told us what to think, and how to live.  It was itself a profitable commodity, but its main function was to advertise other commodities.  And it did so much more.  It trained us to stay at home with just our immediate family, fighting over control of the remote.  It taught us that we had choices, that we could be a liberal watching PBS or a conservative watching CBS…but that only a limited range of choices existed.  We could watch All in the Family and The Jeffersons if we were Democrats, or Bonanza and The Waltons if we were Republicans, but that defined the spectrum of political possibilities in the world.  Television did its job well, teaching us what to want, but also shaping how we lived, how we organized our living space, and who we associated with.

The most important thing I learned from reading Raymond Williams’s pioneering book Television is that what we complain most about is usually exactly what a new technology is meant to do.  We were distressed about sex and violence on TV, Williams explains, but in fact our concern was a mere smokescreen.  As he puts it, “we assumed that violent behavior is unacceptable…but it must be immediately evident, if we look at real societies, that this is not the case” (125).  Television is meant to encourage violence and hyper-sexuality.  Another way to see what television was meant to do is to look at the laments now, in the “post-television” age, about what we have lost.  We read essays in newspapers and magazines lamenting the fact that there is no longer a shared culture—that with the advent of streaming and the explosion of new content, there are no “water-cooler shows” that we all watch and discuss the next day.  This production of a relatively uniform populace, with a limited range of preferences and tastes, was what television did to enable the capitalist social relations of the late twentieth century, when the most important task was convincing everyone to buy the same things, and to sacrifice their lives at tedious jobs for the reward of commodities and the entertainment of Gilligan’s Island.

What if we apply the same critical approach to the smartphone?  

Raymond Williams’s analysis of television was powerful and influential, I would argue, because of an important insight into how we think about new technology.  Most people who were worrying over television labored under the assumption that our society went along just fine, and then out of the blue this accidental technological advance dropped into our midst and altered everything.  What Williams explains is that television wasn’t a fortuitous arrival—many people, and many corporations, spent decades and millions of dollars producing it.  Television was the result of a long effort to find something that would do exactly what television did.  Not because it was a “need,” perhaps, but because it was understood to be a good way to accomplish certain ideological tasks.  That is, American capitalism would have continued without it, and could even have found other ways to withstand the threat of communism following the Great Depression.  But television was beneficial to those goals, and turned out to be the way we did advance capitalism and withstand the threat of socialist ideologies.  If all your entertainments are thoroughly commercialized—that is, if the only way a form of dramatic art gets to the public is if it sells the most laundry soap or instant coffee—then surely our aesthetics, which is where we produce most of our ideology, will be capitalist to the very core.  (Don’t worry too much right now about this claim that aesthetics works to “produce” ideologies—we’ll cover that in a later chapter as well.)

Well, the same is clearly true of smartphones.  It was a long and expensive process to develop a form of communication perfectly suited to producing the kind of subject needed for the new world order of global capitalism.  Think about all the things a smartphone can do:

  • Nobody engages with others in person anymore.  Participation in social and political organizations, already weakened by TV, declined drastically once again with the advent of the smartphone.  When in public, almost nobody looks up from their phone even long enough to acknowledge another person’s presence.  Ellen DeGeneres jokes that when at home with her girlfriend the two of them lie on different sofas sending “memes” back and forth instead of interacting with one another directly.  We all laugh at this, but it is a sad reality.  Watch two young people on a date, sitting across the table from one another at a restaurant, each looking at their phone.
  • Nobody even talks anymore—it is a rare and odd occasion when someone uses their phone to speak!  Phones are for texting only, a very limited and, to someone looking on from the outside, bizarrely inefficient way to get things done.  Instead of a two-minute conversation to arrange a meeting, people send unclear texts back and forth for ten or fifteen minutes, exasperated by one another’s inability to understand what each wants to do.  But we avoid things like tone of voice, and so can substitute silly little pictures we call “emoticons” for things like emotional connection and shared intentions.
  • Everyone has an individual “feed” which is suited to their own beliefs and interests. This ensures that we are never exposed to any facts or arguments that might weaken our commitment to what we already believe, or lead us to question our assumptions about the world.  More importantly, this serves as a practice to actually make “true” the postmodern belief that there are no truths, only subjective beliefs.  We are sure, now, that nobody can ever be persuaded by mere concrete factual evidence and cogent logical arguments—our “beliefs” arise from deep within, and are impervious to change by any amount of experience or thought.  
  • The phone ensures that we don’t really need to investigate any topic deeply and thoroughly.  Most people are sure that anything they need to know they can find out from “Google” quickly, with a short and easy explanation.  Anything more complex than this, anything that might require putting down the phone and reading a book, isn’t important enough to attend to.  (I hope you can put your phone down long enough to read this book—I promise to make the rest of the chapters shorter than this one, so you won’t have to be away from your feed for more than half an hour at a shot.  Unfortunately, ideology just is one of those concepts you can never get a clear idea of without some sustained attention and effort.)
  • Smartphones supply us with constant stimulation, always promising that the next text or the next notification in your feed will provide some momentary pleasure.  The pleasure here is always passive, of the kind that the investor gets when he watches his stocks rise while he does nothing.  What this offers is effortless and spontaneous validation and entertainment.  The result is that we become someone who has trouble finding enjoyment in anything that we initiate, that requires effort, and that is done without an audience who can rate, like, or comment on it.  We become the kind of passive stimulus-response machines, never acting without an external prompt and always needing an instant reward, that late capitalist social relations need us to be.
  • Worst of all, keeping “connected” with dozens of people by text is powerfully isolating, leading to a general sense of meaninglessness, loneliness, depression, and anxiety.  Young people today are on psychiatric medications, or dying of drug overdoses and suicide, at rates never seen before.  When we “stay connected” with people, all we know of them are what I call their “Christmas-Letter Personas,” the image they want to project to the world, of happiness and success.  Of course, some people may want to project a different, less cheerful image, but that doesn’t change the fact that on social media we only know of another person what they think they have to be like to be popular.  We don’t know their concerns about mortality, their existential crises, their puzzling over the meaning of life, or even their day-to-day struggles to get everything done.  And we are left isolated, and unable to find any meaningful projects to engage in that will fulfill us.

So what does all this do, as an ideological practice, to help reproduce our existing relations of production?  It may be obvious, but I’ll spell it out anyway.

What global capitalism needs is fractured, isolated subjects, easily manipulated and easily controlled, unable to think clearly about how the world really works.  This way they cannot organize protests as their standard of living declines.  They cannot even become aware of the fact that they need to go into enormous debt to get multiple college and graduate degrees to work at jobs where they will have longer hours to earn less (in terms of real spending power) than their grandparents did with a high school diploma and a forty-hour week.  They can’t figure out why they are so anxious and depressed, so they can be made dependent on addictive mind-altering medications which make enormous profits for international corporations.  At the simplest level, they can be made to pay thousands of dollars a year for their all-important phone connections, and then thousands more for apps and subscriptions to multiple streaming services like Netflix, Hulu, Amazon Prime, etc.  Their greatest desire is to get the newest technology, and they work to pay for it.  The smartphone is a brilliant way to transfer enormous amounts of wealth from the general public to the wealthiest 2%, without the messy need to do things like manufacture cars or refrigerators—durable goods with practical uses that were the main source of wealth in industrial capitalism.  (We will discuss what exactly this “transfer of wealth” really means in a later chapter, too.  What it means is one of the things people need to be kept ignorant of to keep our current relations of production rolling along.)

The important point to take away from this is that smartphones are an ideological practice. They produce a kind of person who is willing to keep on doing exactly what they need to do in order to continue reproducing our current way of life.  If we keep in mind that this is a way of life that requires that most human beings on the planet live lives of misery and deprivation so that a few can live in affluence, and if we keep in mind that this is a way of life that is on the way to destroying most life on the planet, we might question whether this is a good ideology or a bad one.  We might, then, put down our phones…and find ourselves interacting with other humans in person, no longer in need of antidepressants or anxiolytics, and able to begin to change the world.  

We might, that is, begin to change the ideological practices we engage in.

That’s an excerpt from near the end of the second chapter, which attempts to explain the concept of ideology.  Of course, I’m looking at this from the outside, as someone who doesn’t own a cellphone.  But I suspect that may be the only way this can be done.  Those using the phones often seem unable to see the forest for the trees—focusing on the “usefulness” of that last text or how informative the last notification was, and so unable to see the overall effects of the practice they are engaged in.

My hope is that this is a new enough phenomenon that it shouldn’t be impossible to see it as optional.  My students are horrified when I say I don’t have a smartphone.  They are sure that it is essential.  “What would happen if your car breaks down?” they ask me.  Well, some of us are old enough to remember that we managed such little crises all the time without cellphones, and could easily do it again if we chose.  “How do you stay in touch with people?” they ask.  Again, I’m hoping many of us are old enough to remember when we sat with a friend and had a real conversation about important things, no phone required.  We know that for most of human history jobs got done just fine without anyone being available 24 hours a day by text message.  Whatever the crisis you spend an hour texting about in the evening, it can usually wait until morning—and even with the intrusive texts, it almost always has to wait anyway.

But as I’ve said, after having a cellphone for a year (before there were smartphones) I decided it was more of a burden than a help, and I got rid of it.  So maybe I’m missing something crucial to the smartphone ideology here?  

Notes on Reading “Articulating Reasons,” part 2

Before they slip the sieve of my aging memory, I want to complete my notes on the points in Brandom’s book that will be important to the book I’m working on.

I’ll sketch here a few more of the fundamental orienting assumptions Brandom outlines, but in a way that is meant mostly to emphasize what is of importance to the argument I will want to (eventually) make.

What is a concept?  Against the standard “intensional” understanding, in which a concept is a specific set of real conditions in the world, the inferentialist approach is to understand the concept as a kind of doing, as a way of making explicit what kinds of commitments one is making in undertaking some kind of action.  This has implications for the concept of “truth,” and helps overcome the “true justified belief” idea of knowledge.  (This has always seemed to me to simply beg the question—since if you could know when something is “true,” you wouldn’t need to worry about justification or belief…and if you could know when something is actually “justified,” determining whether it is also “true” would be redundant.  I tried to raise this question in the one college philosophy course I ever took, but the professor could not grasp my point.  And no philosopher I have raised it with since has grasped it either.  Nonetheless, I persist in believing this is a fatal flaw of the TJB theory of knowledge.)

Instead, what we have in Brandom’s approach is the idea that we begin from an “appropriate doing” (I’m still unsure how we would know we have one…but that’s another concern) and then try to figure out what kinds of inferences “preserve” the “good moves.”  This is a bit puzzling in the description here, but seems to suggest that instead of starting with a given, and using rules of proper inference (logic) to extend it, we start from the proper actions, and then consider as “true” the inferences that enable them.

Another key issue is “semantic holism versus atomism.”  This is something I have considered essential to any understanding of how symbolic systems work since my first encounter with Lacan decades ago.  The point here is that, as Brandom puts it, “one cannot have any concept unless one has many concepts.”  This seems to me to be supported by the discussion of what symbolic means in Deacon’s book The Symbolic Species.  No concept can be grasped separately from the whole set of concepts which it entails.  Lockean theories of language tend to work the other way around—and this Lockean approach is essentially a way of avoiding the problematic truth that we get our concepts not empirically but socially, in a language that is already made by others and whose use we simply must enter into.

A final issue here is the inversion of the understanding of logic.  What Brandom has in mind here is what I always took to be the Hegelian logic. That is, instead of assuming we can use logic to prove the truth of a claim from incontrovertible premises, logic might instead work to draw out the implications of what we take to be those incontrovertible premises. Logic, then, is less Aristotelean than Socratic—working to push us to become aware of what we are assuming but may not be recognizing.

Two more major points, then, from two later chapters in the book. There are many other important points in this little book, but these two are what will be most important to the kind of argument I want to make.

The first is from chapter two, in which Brandom attempts to “offer an account of the will as a rational faculty of practical reasoning.”  Discussing Kant, Brandom argues that “Kant’s big idea” is that what distinguishes language-using humans from other kinds of creatures is that we can be responsible for our commitments.  The smallest thing we can be responsible for is a “judgement,” understood to mean “predicating a general term of a singular one,” which is to say, we are responsible for our aesthetic construal of the world.  Normative language, which is to say language of “oughts,” of what we ought to do, is a matter of making explicit what entitles us to certain judgements (how we know we are categorizing singular terms correctly) and what this commits us to (what kind of actions in the world we would take if we accept this categorization).

An important point to follow from this is that we do not need to accept the standard Humean take on our capacity to use reason.  That is, the almost universally assumed belief that we have certain desires inborn and out of our control, and then we employ “reason” to figure out how best to achieve those desires.

On this understanding of how reason works, reason can, potentially, operate to help instruct us in what we ought to desire.  That is, reason can tell us what to want, instead of merely helping us to get what we cannot help but want.

The argument here is subtle, but I think cogent.  Desires are not to be understood as the ultimate premise of all actions, but as indications of the kinds of collateral commitments I am making if I am going to do something at all.  Take Brandom’s example of opening an umbrella in the rain.  The standard assumption would be that there must be an ultimate desire to stay dry, which motivates the reasoning process: if I open the umbrella, I will succeed in achieving my desire to stay dry.  But Brandom’s point is that the “desire” statement may simply function to “make explicit the inferential commitments that permit the transition” from “it is raining” to “I will open my umbrella.”  That is, we might have a desire to get wet, sometimes.  The desire statement, then, is not the ultimate real cause, but simply an indication that I am committing myself, in following the norm of umbrella use, to the act of not getting wet.  This assumes that we can do things for all kinds of reasons, for practical reasons, and those reasons can be changed and are not therefore necessarily primary in all acts of agency.

One final point for now, from chapter six, about the nature of objectivity.  This is crucial to any defense against the ubiquitous postmodern ideology of absolute relativism.

The crucial point here is that “the implicit representational dimension of the inferential contents of claims arises out of the difference in social perspectives between producers and consumers of reasons.”  What is most important for my overall argument here is that this assumes that we arrive at an “objective” understanding of the world not despite our differences in assumptions and commitments, but exactly because of them.

The standard take on this problem is that we can never really communicate, since you begin from your implicit assumptions about the world and I begin from mine, and we each have different intentions…so that we never quite do understand one another, and in fact could never succeed in persuading one another to a change in position, since what would count as a “reason” is determined by our construal of the world, and so your reasons won’t be reasons for me.  This position is so common today that it often is simply assumed…but when a particularly damaging piece of evidence is offered against someone’s position, it is not at all uncommon for them to state this position explicitly, and take it as an absolute refutation.  The argument is common, and is often put in terms like “I don’t take that argument seriously” or “postmodern theory teaches us that there is no objective truth” or something along these lines.  The proposer of the irrefutable argument or bit of evidence is then accused of claiming a “God’s-eye view” or of being dictatorial or authoritarian.

This position offers an alternative.  Because what we do in language just is to ask for and give reasons for certain kinds of commitments, we can see the reasons someone else is committed to a certain way of acting in the world.  As part of these reasons for taking extra-linguistic action in the world, we often need to make reference to things, to “represent” things in the world.  (It is important here to remember that “representation” on this approach follows from the commitment to act—it is secondary, rather than primary as it would be in a Lockean epistemology).

I can almost always understand what thing in the world you are referring to, even if that thing in the world has a completely different “meaning” to me, because of my different set of concepts and my different intentions and commitments.  For me, what is crucial here is that we can then gain a sense of what our assumptions are only because there are other people with different ones.  These assumptions may be fundamental, or simply perspectival—that is, we may share most of the same concepts and intentions, but have some minor difference in perceptual experience.  If there were nobody with a different set of assumptions, we would never recognize our assumptions as assumptions, and would have no hope of ever moving toward objectivity.  When we see that someone else conceives of a thing completely differently than we do, we see a way to separate out our assumptions from the object.  The point here is that, if we are thinking correctly, we should recognize that the only reason we can ever be persuaded to change our position on something is that we so often do encounter other language users whose positions are radically incompatible with our own.

If we could all accept this, and stop retreating behind the postmodern avoidance strategy as a way of clinging against all reason and evidence to our destructive intentions, we might have some hope of surviving as a species.

But that’s all on Brandom for now.  Next, I will probably try to post a part of the second chapter of the book I’m writing, in which I try to analyze smartphone use as an ideological practice—tricky to do, since I don’t own one and have never used one myself.  Or, perhaps, easier for me to do for exactly this reason.

I hope my notes on Brandom are clear and of some use to anyone reading this.  Any suggestions as to how to make these points in a more accessible manner will be appreciated.

Notes on reading “Articulating Reasons,” part 1

Lately I’ve been looking to pragmatist philosophers to help me sort out the problem of language.  Clearly, I’ve always been an opponent of most American pragmatism—the reactionary politics and crypto-idealism of pragmatists from William James to Rorty is exasperating.  But there are some thinkers in this vein that take a fundamentally different approach, like Dewey and Sellars.  

What I find most useful about the inferential theory of language is the open assertion from the start of a fundamentally different set of founding assumptions about the nature of language and how its study should be pursued.  These seem to me to be all assumptions that are consonant with thinkers like Hegel or Lacan, but made much more explicitly. Brandom begins the book Articulating Reasons by laying them out in opposition to the more common assumptions with which philosophy of language has typically operated.  

I’m going to summarize some of them here, mostly to get them clear for when I will need them in later parts of my argument.  They seem to me to be simple points, but with enormous implications for a number of human pursuits from pedagogy to politics.  Although I may be wrong that these are “simple” points, since most people, including professional philosophers, seem unable to grasp them—but more on that later.

1). The first commitment Brandom outlines for us is the choice to focus on either the continuities or the discontinuities between “discursive and nondiscursive creatures.”  That is, do we assume that we humans are on a continuum or spectrum of communication that begins with the chemical signals plants send and moves up through bees’ dances and wolves’ growls to human speech?  Or do we assume that human speech is of a fundamentally different kind—that symbolic communication makes us unique among all sentient creatures that we know of?  I have always endorsed the latter, although it seems that the more widely held position is the former.  So most attempts to explain language begin with the study of the brain (ever since Locke, at least) and work up, or begin with a statement referring to the world and reduce down to the neural substrates of this behavior.  It seems to me clear enough from the evidence that no other species has the capacity to alter its environment that symbolic communication has given us.  And certainly it should be clear enough how different proposing marriage is from the mating ritual of the stickleback fish.  Or how different the proposal for a new skyscraper is from a monkey’s use of a rock to open a nut.

Beginning from this assumption is fundamental to recognizing that we can change our behaviors intentionally in ways that animals cannot.  The desire to reject this (to me) obvious truth about humans is part of the global capitalist neoliberal attempt to avoid addressing social problems at a social level.

2). The second choice to be made is between what Brandom refers to as “Platonism or pragmatism.”  The question here is whether the fundamental nature of human thought is to be understood as knowing that or as knowing how.  This one seems to be difficult for most people—since it seems obvious that to know is to have some kind of propositional knowledge about the world outside the knowing mind.  But I would follow Brandom here in beginning from the assumption that the origin of thought is in knowing how to accomplish something, a knowing how that occurs first outside of the symbolic system; knowing that occurs only after we begin to try to make our know-how, what we might call our phronesis, explicit in communication with others.  Propositional knowledge about what the world is really like is not primary, but a consequence of our attempt to communicate, and improve, our phronesis (my term—Brandom uses the expression “know-how”).

This is certainly a point on which Brandom’s argument has changed my position.  I used to be firmly of the position that all thinking occurs only in language, that there is no outside to language.  And I would still argue that human subjects occur only in language.  But I would agree with Brandom that the capacity to respond to external objects as objects is a precondition for all language.  

Without understanding that knowing, even in language, is more a matter of knowing how to take an action than of how the world is in itself, we also cannot arrive at any understanding of how humans can have agency.  If we believe that to have agency we must act from a correct conceptual “mirror” of the world, then obviously we can never begin at all.  We are then left with the now ubiquitous understanding of humans as simply “wet machines” which respond automatically to stimuli, often in unproductive ways.

3). “Is mind or language the fundamental locus of intentionality?”  Another crucial assumption we need to make explicit.  As Brandom explains, in the history of philosophy it has usually been assumed that language has a “merely instrumental role in communicating to others thoughts already full-formed in a prior mental arena within the individual.”  His argument, on the other hand, is that language use is necessary to the formation of concepts, that a concept is produced only in the act of communication between individuals, not within the individual, and that it makes no sense to speak of “intentions” outside of the kinds of claims that are made about the world in language.  This one I will need to run by some readers, I think, because it has seemed so obviously true to me for so long that I am often at a loss to understand exactly why it is incomprehensible to so many people.  Brandom spends less time on this than on some of the other issues he outlines for us, and so seems to think it needs less arguing for—and it is clear enough, I suppose…so why do most people still assume the opposite is the case?  In composition theory, for instance, the underlying assumption that thoughts occur outside of language and are then encoded in a language is so thoroughly taken as given that composition theorists never seem even to feel the need to address the problem of language.  That is, they are so convinced that it is not a problem at all that they don’t think about it—and so composition instruction is notoriously ineffective, repeatedly cycling through the same three “approaches” under new names every time it becomes clear that the latest one works no better under its new label.  What might it take to make this concept explicit to, say, someone whose job is to design a composition course for college freshmen?

4). One last issue for this entry, then.  Is concept use representational or is it expressive?  The usual understanding is that concepts function aesthetically; that is, they are general categories that function to collect specific instances of objects in the world.  They are, then, “representations” of reality, with the word “apple” pointing to a concept of “appleness” of which any given instance is a close-enough match.  (I’m setting aside here the Platonic notion of forms, and the interesting “third man problem” it raises.)  A concept, then, is meant to capture the essential features of the thing in the world, and so to “represent” it.  What Brandom argues is that such “representations” come later, that they are a secondary effect of the primary goal of language, which is “expressive.”

Brandom wants to distinguish his use of the term expressive from a Romantic notion.  The Romantic idea would be that one has an emotion or intuition which is then (somewhat indirectly and inadequately) expressed in a gesture or in language.  What Brandom is interested in is an almost opposite use of the term expressive, in which what we have first is a commitment to some kind of action in the world, and then we try to “express” the assumptions and consequences of that action symbolically through language.  The point here is that language makes clear what in action or phronesis is not yet explicit.  The Romantic notion of expression would insist that the expression makes something less clear—as would the standard notion of representation.  So language is seen as eternally inadequate, and a limitation on and hindrance of thought.  Brandom’s position would lead us to see language as an improvement on (not thought—since that occurs only in language—but) our ability to act in the world.  Asking for and giving reasons is not a falling away from authenticity into the sterile realm of rationality, but the development of true agency.

What the introduction to this book does is to exemplify what inferentialism is about: the goal of making explicit the assumptions and consequences we are committing ourselves to whenever we begin to use concepts.  The postmodern relativism that seems to have convinced most people that this cannot be done has robbed us of our agency, and is on course to doom the human species to a painful extinction.  Convincing people to change the way they think about reasons—to break out of what is often called the “Humean condition,” in which we can never act for a reason, in which our motives are determined and reason is purely instrumental—seems to be a crucial goal if we are concerned for our children’s well-being.

There are five more such issues to discuss just to get through the introduction to this book.  All of them have, it seems to me, enormous practical implications.  Part of my goal later on will have to be to demonstrate how to put this way of construing the world to use in solving real world problems.