Now, however, Matthew Yglesias has done me the favor of explaining how Amazon got its taxes down from over $2 billion (had it paid 21% on $11 billion) to zero.
If this had happened in 2017 or earlier, net operating losses (NOLs) might have been the lead suspect, at least before one actually examined the evidence. Amazon had big losses for years, and there’s absolutely nothing wrong if a company (genuinely) loses $11 billion in Year 1, then earns $11 billion in Year 2, and pays no Year 2 tax by reason of the NOLs from Year 1. But ironically, the 2017 act effectively presumes that there would be something wrong here, as it limits NOL deductions to 80% of current year profits. This was presumably directed at optics, more than at substance, although note that the disallowed excess NOLs can be carried forward indefinitely.
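The 80% cap's arithmetic in that two-year scenario can be sketched as follows (purely illustrative numbers; a minimal model of the post-2017 rule, not a tax calculator):

```python
# Minimal sketch of the post-2017 NOL limitation: NOL deductions are capped
# at 80% of current-year taxable income, with the excess carried forward.
# Hypothetical scenario: an $11B loss in Year 1, $11B of profit in Year 2.
RATE = 0.21  # corporate rate

def year2_tax_with_nol_cap(prior_nol, year2_income, cap=0.80):
    """Year 2 tax owed when NOL deductions are capped at `cap` of income."""
    allowed_nol = min(prior_nol, cap * year2_income)
    taxable = year2_income - allowed_nol
    carryforward = prior_nol - allowed_nol  # excess carries forward indefinitely
    return RATE * taxable, carryforward

tax, carryforward = year2_tax_with_nol_cap(11e9, 11e9)
print(f"Year 2 tax: ${tax / 1e9:.3f}B")            # 21% of the unsheltered $2.2B
print(f"NOL carried forward: ${carryforward / 1e9:.1f}B")
```

So even a company that "genuinely" broke even over the two years owes roughly $462 million in Year 2 under the cap, with the remaining $2.2 billion of NOLs deferred rather than lost.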
Yglesias finds three main causes in the record for Amazon’s zero tax liability. The first is research and development (R&D) credits, which he notes get wide academic support, because research may yield positive externalities.
The second is temporary expensing for investing in equipment, which he notes is more controversial than R&D credits, but also gets some support across party lines. But I’d add two points here. First, expensing makes more sense when one is doing more than the 2017 act did to limit the tax benefit from effectively pairing it with interest deductions, which can cause debt-financed investment to be treated better than if it were exempt (i.e., to face a negative effective tax rate). Second, there may have been a temporary transition effect insofar as, in 2018, Amazon was combining expensing for new equipment with continued depreciation for past years’ equipment. This overlap from the shift between regimes would be expected to diminish in future years as expensing remains in place. And if equipment expensing is indeed allowed to expire as currently scheduled, then in the first year after the expiration Amazon’s tax liability, relative to reported profits, might be unusually high (all else equal), by reason of the opposite regime shift.
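The transition overlap can be illustrated with a deliberately stylized sketch (the $100-per-year purchases and five-year straight-line recovery are hypothetical, not Amazon’s actual figures):

```python
# Stylized transition from depreciation to expensing: a firm buys $100 of
# equipment every year. Pre-2018 purchases are depreciated straight-line
# over 5 years; purchases from 2018 on are expensed immediately.
ANNUAL_PURCHASE = 100
LIFE = 5  # assumed straight-line recovery period for pre-2018 equipment

def deductions(year, switch_year=2018):
    """Total deductions claimed in `year` across all still-relevant vintages."""
    total = 0
    for purchase_year in range(year - LIFE + 1, year + 1):
        if purchase_year >= switch_year:
            # expensing: the full deduction arrives in the purchase year only
            total += ANNUAL_PURCHASE if purchase_year == year else 0
        else:
            total += ANNUAL_PURCHASE / LIFE  # ongoing depreciation
    return total

# Deductions spike in the switch year, then decline back to steady state.
for y in range(2017, 2023):
    print(y, deductions(y))
```

In this toy version, deductions run at $100 per year in steady state, jump to $180 in 2018 (new equipment expensed on top of old equipment still depreciating), and fall back to $100 by 2022, which is the diminishing transition effect described above. An expiration of expensing would produce the mirror image: a year with unusually low deductions.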
The third cause for Amazon’s zero tax liability, despite its $11 billion in reported earnings, is that its surging profits, by driving up its stock price, increased its deductions for paying stock-based compensation to its executives. Yglesias explains why allowing the deduction may make sense from a corporate income measurement standpoint, leaving aside the corporate governance issues that may be associated with very high executive compensation, but I’d add one more thing. The $1 billion in stock-based compensation that he reports as having been deducted by Amazon presumably DID lead to significant tax liability on the executives’ part – indeed, one would presume, at a tax rate that was generally well above Amazon’s 21 percent corporate rate. So the paid-out $1 billion in profits was indeed taxed to someone, and perhaps at more than the U.S. corporate rate, unless there were tax planning machinations at the individual level.
Insofar as this $1 billion was taxed by the U.S. at the individual level at a rate above 21%, I’d view that as an entirely adequate substitute for Amazon’s being directly taxed on the same value.
2001’s HAL, of course, wasn’t a robot, if one’s definition of the term requires creature-like embodiment. But he was enough like us to be capable of going mad. (For that matter, I once inadvertently gave an otherwise well-adjusted pet iguana a seemingly neurotic aversion to going into his water bowl when there were people around – he instinctually expected to be safe when he was in there, so was startled to find I’d just take him out of the cage anyway. I concluded that at least fairly intelligent animals – iguanas, surprisingly, qualify! – can develop neurotic aversions. But I digress.)
Yesterday’s colloquium guest, Susan Morse, presented an interesting paper that addresses how the rise of automation and centralization in legal compliance may transform the character of enforcement. These days, lots of tax filing involves the use of software – for example, Turbo Tax, H&R Block, TaxAct, Tax Slayer, Liberty Tax, and proprietary products that, say, a leading accounting firm might deploy with respect to its clients.
These programs, though hardly on the more sentient side of AI (unless one agrees with David Chalmers about thermostats), might nonetheless have their own versions of “I’m sorry, Dave, I’m afraid you can’t deduct that.” An example that I’ve heard about, from a few years back, concerns Turbo Tax and how to allocate business vs. personal use regarding expenses that relate to a second home which one also rents out. Should one allocate by days of specific business versus personal use, or over a 365-day base? Apparently there are arguments for both approaches, but Turbo Tax either heavily steered people one way, or actually “refused” to do it the other way.
What’s more, they might offer an opportunity for centralized enforcement – for example, for the IRS’s directly collecting from Turbo Tax the taxes underpaid on Turbo Tax filings, at least where this reflected an error in the Turbo Tax program. The amount might be estimated, rather than calculated precisely as to each particular Turbo Tax-filed return.
In this scenario, if we assume that Turbo Tax isn’t going to try to get the money from individual filers (and note its current practice of holding customers harmless for extra taxes paid by reason of its errors or oversights), then in effect it will add expected IRS payouts into its prices, making it somewhat like an insurer.
The paper’s goal is not to advocate approaches of this kind, but rather to say that their increasing feasibility means we should think about them, and about the broader opportunities and challenges presented by rising automation and centralization, both in tax filing and elsewhere.
I myself have tended to see Turbo Tax, which I used until recently, as little more than a glorified calculator and form filler-outer. For example, it allows one to spare oneself the enormous nuisance of computing AMT liability (a real issue for New Yorkers pre-2017 act), or of having forgotten to include a small item until after one had already made computations based on adjusted gross income. So Intuit would really have had to screw up something basic, in order for Turbo Tax to have gotten my federal income tax liability wrong.
Nonetheless, especially if these programs become more HAL-like, but even just today when they offer a data and collection source, they can become important loci for federal enforcement and collection efforts. The paper notes, however, that there might be issues both of capture (Intuit manipulates government policymakers to favor its interests) and of reverse capture (Intuit, despite incentives to please customers that push the other way, decides on “I’m sorry, Dave” by reason of its relationship with the tax authorities).
Here’s an example that occurs to me – although I suspect it’s not actually true. New York State created certain charities, gifts to which qualify for an 85% credit against state income tax. Thus, if at the margin I can’t deduct any further state income taxes on my federal return, giving a dollar to such a charity costs me only 15 cents after state tax, but leaves me 22 cents ahead after federal tax, if a full $1 charitable deduction is permissible and my federal marginal rate is 37%.
The Treasury has taken the view that, under these circumstances, my permissible federal deduction would only be 15 cents (i.e., the contribution minus the value of the state credit) – even though claiming simple state charitable deductions is not thus treated. But the Treasury might be wrong – i.e., it depends on what the courts ultimately decide, if the issue is litigated. Suppose, however, that Turbo Tax, which has you list the names of the charities to which you have given deductible contributions, in effect said “I’m sorry, Dave” once you had typed in the requisite name. This would in effect be reverse capture (although I doubt that Turbo Tax actually works this way, and it wouldn’t be hard for taxpayers to think of simple workarounds). It might impede taking the deductions, and then fighting the IRS in court if necessary, while using Turbo Tax.
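The arithmetic in the example above, under both the full-deduction view and the Treasury view, can be sketched as follows (a minimal illustration using the rates from the text):

```python
# Charity-credit arithmetic from the NY-style example: a $1 gift, an 85%
# state credit, and a 37% federal marginal rate (all taken from the text).
GIFT = 1.00
STATE_CREDIT_RATE = 0.85
FED_RATE = 0.37

cost_after_state = GIFT - STATE_CREDIT_RATE * GIFT   # $0.15 out of pocket

# If the full $1 federal charitable deduction is allowed:
fed_savings_full = FED_RATE * GIFT                   # saves $0.37 federally
net_full = fed_savings_full - cost_after_state       # ends up $0.22 ahead

# Under the Treasury view, only the gift minus the state credit is deductible:
fed_savings_treasury = FED_RATE * cost_after_state   # 37% of $0.15
net_treasury = fed_savings_treasury - cost_after_state  # a net cost of ~9.5 cents

print(f"net if full deduction allowed: ${net_full:+.2f}")
print(f"net under Treasury view:       ${net_treasury:+.3f}")
```

The sign flip is the whole fight: with the full deduction the “gift” is a money machine, while under the Treasury view it reverts to an ordinary (if cheap) charitable contribution.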
One of the paper’s important themes concerns the relationship between (1) finding someone (such as tax software providers) to serve an insurance function, and (2) being able to improve taxpayer incentives, at least in one particular dimension, by having no-fault penalties for underpayment of tax. (I wrote about this issue here.)
To illustrate the reasons for and problems with no-fault penalties, consider the following toy example: Suppose I can either owe $50 of tax with certainty, or else engage in a transaction the correct treatment of which is legally uncertain. If I do it and the issue is then resolved, there’s a 50% chance that I’ll owe zero, and a 50% chance that I’ll owe $100. So my expected liability is $50 either way – assuming that the latter transaction will definitely be tested.
In reality, however, what we call the “audit lottery” means that I can do the transaction, report zero liability, and be highly likely never to have it examined. Suppose that the chance of its being examined was as high as 50%. Even under that, probably quite unrealistic, scenario, my expected tax liability, if I do the transaction, is only $25. 50% of the time it’s never challenged, and 50% of the time when it’s challenged I win.
This is actually a pervasive issue in tax planning, inducing taxpayers to favor taking uncertain and even highly aggressive positions because they might never be challenged. The key here is that one generally won’t be penalized if one loses, if the position one took was sufficiently reasonable. A 50% chance of being correct would easily meet that threshold.
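The toy example above can be sketched in a few lines (the numbers are the ones from the text; the `penalty_multiplier` parameter anticipates the Becker-style fix discussed next):

```python
# Audit-lottery toy example: I pay on the uncertain position only if I am
# both audited AND I lose; each happens with the stated probability.
def expected_liability(p_audit, p_lose=0.5, tax_if_lose=100, penalty_multiplier=1.0):
    """Expected payment on the uncertain position."""
    return p_audit * p_lose * tax_if_lose * penalty_multiplier

CERTAIN_ALTERNATIVE = 50  # the sure $50 liability from the safe choice

print(expected_liability(1.0))  # if the position is always tested: matches the certain $50
print(expected_liability(0.5))  # with a 50% audit chance: only $25 expected
```

Even with an implausibly high 50% audit rate, the uncertain position’s expected cost is half the certain alternative’s, which is exactly the distorted incentive the text describes.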
The solution to this incentive problem was stated long ago in work by Gary Becker concerning efficient crime deterrence. Suppose a crime one might commit has a $100 social cost. With certainty of detection, the Becker model advocates a $100 fine (leading, of course, to the notion that there are efficient crimes, e.g., one that I commit anyway because the benefit to me is $105). But then, Becker notes, there is the issue of uncertainty of detection. If there’s only a 50% chance that I would be caught, then the penalty, from this standpoint, ought to be $200.
Ditto for the above tax planning / audit lottery example. Given the 50% chance of detection, it all comes out right (in terms of ex ante incentives, ignoring risk) if we say that I have to pay $200, rather than $100, in the case where I am audited and lose. This is a no-fault or strict liability penalty, ramping up the amount I owe in order to reverse out the incentive effects of the audit lottery.
But what about the fact that I apparently did nothing wrong, yet am being penalized? Surely it’s not unreasonable for me to take a position that has a 50% chance of being correct. And we don’t currently require that taxpayers flag all uncertain positions in their tax returns – partly because the IRS would never be able to check more than a small percentage anyway. While I’m not being sent to jail here, there is an issue of risk. But before turning to that, consider one more path to the same result: Steve Shavell’s well-known work concerning negligence versus strict liability.
Shavell doesn’t have a multiplier for uncertainty of detection in his simplest model (although I’m sure he deals with it thoroughly somewhere). But he notes that strict liability produces more efficient outcomes than negligence where only the party that would face the liability is making “activity level” choices. E.g., if drivers don’t have to pay for the accidents they cause unless they’re negligent, they’ll drive too much, by reason of disregarding the cost of non-negligent accidents. (It’s more complicated, of course, if, say, the pedestrians they would hit are also deciding on their own activity levels.)
Returning to tax uncertainty, the problem with a negligence standard for underpayment penalties is that it leads to an excessive “activity level” with respect to taking uncertain positions that might be wrong yet remain unaudited. Strict liability is therefore more efficient than negligence at this margin, unless we add to the picture the equivalent of activity-level-varying pedestrians. For example, we might say that the government’s losing revenues from uncertainty plus sub-100% audit rates gives it an incentive to try to reduce uncertainty. But I don’t personally find that a very persuasive counter-argument in this setting.

Okay, on to the problems with strict liability tax penalties. Let’s suppose in the above toy example that my chance of being meaningfully audited on this issue was only 5%. Then the optimal Becker-Shavell penalty is twenty times the underpayment, or $2,000. Add a few zeroes and, say, a $10,000 tax underpayment (as determined ex post) leaves me owing $200,000. Or, if the chance of a meaningful review was 1%, the short straw leaves me owing $1 million – even though we may feel I did nothing unreasonable. (Again, the disclosure option, while in special circumstances required under existing federal income tax law, can’t go very far given the costliness of review – which is itself a further complicating factor for the analysis.)

If I am risk-averse, the burden this imposes on me may yield deadweight loss (other than insofar as we like its deterring me). From an ex post rather than ex ante perspective, it leads to particular outcomes that we may find unpalatable. A further, but lesser, problem is that it may be hard to compute audit probabilities accurately. Note, however, that requiring negligence is equivalent to presuming a 100% audit chance, in cases where negligence would not be found. From that perspective, multipliers that are still “too low” but greater than 1.0 at least do something to improve incentives around uncertainty and the audit lottery.
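The Becker-style penalty rule is just the reciprocal of the detection probability applied to the underpayment; a minimal sketch of the figures from the text:

```python
# Becker-style no-fault penalty: to neutralize the audit lottery, scale the
# ex post underpayment by the reciprocal of the detection probability.
def becker_penalty(underpayment, p_detect):
    """Penalty that makes expected payment equal the underpayment itself."""
    return underpayment / p_detect

# The scenarios discussed in the text: a $100 underpayment at a 50% audit
# rate, and a $10,000 underpayment at 5% and 1% audit rates.
for underpayment, p in [(100, 0.5), (10_000, 0.05), (10_000, 0.01)]:
    owed = becker_penalty(underpayment, p)
    print(f"underpayment ${underpayment:,}, p_detect {p:.0%}: owe ${owed:,.0f}")
```

The steep scaling at low audit rates is what makes the ex post outcomes so unpalatable: the expected-value logic is impeccable, but the unlucky audited taxpayer bears the whole multiplier.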
So the risk problem arguably weighs more heavily against strict liability than the difficulty of getting the multiplier just right.

This is where insurance comes in. The above problem goes away if taxpayers with uncertain positions can and do transfer the risk, for an actuarially fair price, to counterparties that can price and diversify it properly. But tax insurance is not widely available, and is hard to price. Hence the potential appeal of recruiting entities (such as Turbo Tax) that sit in a centralized position, if doing so doesn’t create overly bad problems such as adverse selection or moral hazard. (Adverse selection is inevitably an issue, however, if not all taxpayers use entities that can be recruited to serve, in effect, as insurers.)

One further issue, in this regard, on which the paper touches is the feasibility of a system that would, say, charge Turbo Tax for user underpayments that reflected factual inaccuracies in the data that one entered. Can we even imagine a system in which, if I used Turbo Tax and left out a $10,000 cash payment that someone had made to me, it was liable?
The answer would seem to be no, but actually it’s a bit more complicated. Consider car insurance. The insurer will typically pay for accident costs even if they’re completely the driver’s fault, reflecting wildly inappropriate behavior (such as driving drunk, running red lights, texting while driving, etc.). In other words, the insurer loses if the driver is negligent, even though negligence is under the driver’s control, or at least influence.
How is such insurance coverage feasible? Well, it certainly creates moral hazard, but there are ways of addressing it, such as literal coinsurance (such as from deductibles and copays), implicit coinsurance (such as from collateral psychic or other accident costs to the driver that aren’t covered), and future years’ insurance rates that will now presumably be higher. So it’s feasible to have at least some car insurance for negligent drivers, despite the issue of moral hazard.
By extension, we could conceivably have a model in which Turbo Tax was liable, at least in part, even with respect to factual errors made by its customers, so long as analogous mechanisms sufficiently addressed moral hazard. But this still of course leaves the problem of mandatorily drafting software providers to serve as insurers by imposing no-fault collective liability, if strict liability doesn’t apply to taxpayers who file without using such providers.
Returning to the paper, it doesn’t purport to resolve any of these questions, but rather to begin laying out and addressing them. This particular piece will be appearing shortly in the University of Illinois Law Review, but I’ll be looking forward to Morse’s further work in the area.
It’s called “The Games They Will Play: Tax Games, Roadblocks, and Glitches Under the 2017 Tax Act,” and it’s available for download here.
The abstract goes something like this: “The 2017 tax legislation brought sweeping changes to the rules for taxing individuals and business, the deductibility of state and local taxes, and the international tax regime. The complex legislation was drafted and passed through a rushed and secretive process intended to limit public comment on one of the most consequential pieces of domestic policy enacted in recent history.
This Article is an effort to supply the analysis and deliberation that should have accompanied the bill’s consideration and passage, and describes key problem areas in the new legislation. Many of the new changes fundamentally undermine the integrity of the tax code and allow well-advised taxpayers to game the new rules through strategic planning. These gaming opportunities are likely to worsen the bill’s distributional and budgetary costs beyond those expected in the official estimates. Other changes will encounter legal roadblocks, while drafting glitches could lead to uncertainty and haphazard increases or decreases in taxes. This Article also describes reform options for policymakers who will inevitably be tasked with enacting further changes to the tax law in order to undo the legislation’s harmful effects on the fiscal system.”
A bone marrow transplant is a medical procedure performed to replace bone marrow that has been damaged by chemotherapy, infection, or certain diseases, including leukemia and other blood cancers. The procedure transplants healthy blood stem cells, which travel to the bone marrow, where they begin producing healthy blood cells and assist in the growth of new marrow. Bone marrow, the spongy, fatty tissue located in the hollow of our bones, is essential for healthy blood cell production, generating red blood cells, white blood cells, platelets, and other types of cells.
What Happens In Leukemia
Leukemia is a form of blood cancer in which the bone marrow does not produce enough healthy blood stem cells. Instead, it produces abnormal stem cells of odd shapes, sizes, and other attributes, which give rise to cancerous cells that overwhelm the healthy ones. This leads to anemia and other related problems. Patients suffering from leukemia therefore need healthy stem cells in their bodies, and this is where a bone marrow transplant becomes important.
Why The Need For Bone Marrow Transplant
As mentioned above, a bone marrow transplant is needed when the body cannot produce the required amount of healthy blood stem cells. Unless the damaged blood stem cells are removed and replaced with healthy ones, the patient’s condition will deteriorate, often fatally within months or a year at most. Therefore, once a patient has been confirmed to be suffering from leukemia, finding a bone marrow donor match is considered one of the most important options.
The Risks Associated With Bone Marrow Transplant
There is no doubt that a bone marrow transplant (BMT) is a major medical procedure, and there are risks involved in it. Patients may experience nausea, headache, a drop in blood pressure, shortness of breath, pain in the body, fever, unexplained chills, and a host of other such problems. These risk factors must therefore be weighed against the benefits, and your doctor or specialist is the right person to make that call.
Other factors should also be taken into account, such as the patient’s age, overall health, the disease being treated, and the type of transplant required. Here again, the doctor or specialist handling the case is the best person to decide whether you are the right candidate for a bone marrow transplant.
The Final Word
In the end, there is no doubt that leukemia, like all other forms of cancer, is a dangerous ailment. However, if a diagnosis is made accurately and at an early stage, it is often curable with the help of a bone marrow transplant and other methods of treatment.
Prisoner’s dilemmas are pervasive in public policy. One gets them whenever there are positive or negative externalities that no institutions (be they Coasean markets or Pigovian taxes and subsidies) adequately address.
Pollution and over-fishing are among the classic examples. E.g., if I want to drive my car a lot, run the heat and AC to the max, etc., but everyone’s doing this causes catastrophic global warming, then, from a selfish standpoint, the best thing would be if everyone BUT me curtailed their activities suitably. But, given my individually trivial contribution to the overall problem, I’m best off defecting whether or not everyone else is cooperating, absent sanctions or other ways of internalizing to me the marginal cost of my causing carbon release.
With selfish players, a one-shot prisoner’s dilemma has a simple Nash answer: everyone defects, so everyone loses relative to the case where everyone cooperated. While there may be real world mitigating solutions, such as repeat play with sanctions from the other players, wouldn’t it be nice if people were willing to cooperate voluntarily, despite the selfish unilateral incentive to defect?
John answers: Not only would it be nice, but we do in fact frequently cooperate! So the Nashian view of people as always selfishly pursuing just their own welfare is inaccurate. Indeed, evolution has yielded in us a species that is unusually, and among the great apes uniquely, inclined towards cooperating with each other under suitable conditions (such as where we feel solidarity and trust towards fellow group members).
While sanctions for defection may play an important role in preserving cooperative non-Nash equilibria, they’re not the only reason we cooperate. Nor is altruism the main reason, as it tends to be limited to a much smaller core group (such as immediate family) than the set of people with whom one is willing to cooperate.
John also finds it largely unhelpful to posit exotic preferences, such as a “warm glow” achieved subjectively by cooperating, as the explanation for the behavior. It seems to him both too hand-tailored (like Ptolemaic epicycles to reconcile celestial movements to data) and backwards, in the sense that I don’t cooperate to get a warm glow, even if I in fact get one from cooperating. I cooperate because I believe it’s right to do so.
While I see his point here, I think the “warm glow” framing is intellectually helpful for a particular reason. Even if I cooperate because I think it’s right to do so, and that this differs from eating chocolate because I think it tastes good, real-world cooperators are likely to be trading off their desire to cooperate against other things they care about. Suppose I recycle because I think it’s right to do so, not because the city might find out and fine me if I don’t. I still would likely start recycling a lot less if, say, it took several hours a week.
John says that those who cooperate, rather than defect, in prisoner’s dilemmas are generally being Kantians, as I’ll discuss shortly. But while the paper we discussed yesterday doesn’t discuss Kantianism that’s limited by one’s trading it off against selfish preferences, it does discuss conditional Kantians – that is, those whose willingness to behave cooperatively depends on how prevalent they believe cooperative behavior is in the relevant population. (See Figure 1, at page 33 of the paper, for a visual depiction of an equilibrium at which the % actually cooperating equals the % that are willing to cooperate at that level of cooperation.)
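The equilibrium idea can be sketched numerically; the willingness curve below is purely hypothetical (a few unconditional Kantians plus many conditional ones), not the one estimated in the paper:

```python
# Sketch of the conditional-Kantian equilibrium: each person cooperates only
# if the share of cooperators they observe is high enough for them, so the
# equilibrium is a fixed point where actual cooperation = willingness.
def share_willing(p):
    """Hypothetical willingness curve: 10% cooperate unconditionally,
    and the rest become more willing as observed cooperation p rises."""
    return 0.1 + 0.8 * p

# Iterate from zero cooperation toward the fixed point.
p = 0.0
for _ in range(100):
    p = share_willing(p)

print(f"equilibrium cooperation rate: {p:.2f}")  # converges to 0.50
```

With this particular (made-up) curve, society settles at 50% cooperation even though only 10% would cooperate in a vacuum, which is the flavor of the Figure 1 equilibrium: conditional cooperators amplify an unconditional core.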
I gather that philosophers have questioned this set-up, saying you aren’t actually a Kantian if you’re being conditional about it. While this is true as a matter of definition, once one has defined Kantians as they choose to, it is intellectually unhelpful, and would appear to be an instance of narrow-minded and retrograde siloing (an inclination that I’ve encountered from other disciplines, in my project on literature and high-end inequality).
Returning to prisoner’s dilemmas, a Kantian who faces one may ask: What is the decision that would be best if ALL of us made it? With the classic PD structure, the answer (of course) is Cooperate, don’t defect. So the Kantian does what would be best if all did it, simply because this is the right thing to do, and not based on any actual presumed effect of one’s own decision on what others will decide. So the Kantian (for example) recycles – and, I would think, also considers following a code with respect to carbon emissions that, if universalized, would properly curtail global warming and other adverse climate change.
But how does one identify the proper Kantian course of action? In a simple prisoner’s dilemma set-up, it’s obvious, since there are just two choices, Cooperate and Defect. Maybe one should think of recycling that way. As to global carbon abatement, it’s not as clear, not to mention that the motivation to cooperate (even assuming one can determine how) will be weaker if one is among John’s conditional Kantians.
John notes that many people do in fact recycle, beyond the point that sanctions and conventional incentives would seem to be inducing. There may also be a bit of Kantian behavior around carbon abatement. For example, while I am sure I do not do nearly enough in that regard, or as much as I would do if I were responding via standard incentives to a global carbon tax that had been set at an appropriate level, it is something I have in mind, and that induces me to disfavor what I feel is overly wasteful behavior. So yes, I am, upon reflection, somewhat of a Kantian, albeit a conditional one both in John’s sense of being influenced by what I think others are doing, and my sense of trading off my preference for doing what is right in the Kantian sense against more selfish considerations.
In calling my own behavior Kantian, however imperfectly so, I am agreeing with John about the underlying psychology. Whether or not the categorical imperative is exactly the right formulation, the underlying sentiment of fairness does appear to me (from self-reflection) to have something to do with symmetry and consistency between what people do for themselves and expect from others. And in my case, but I suspect for many other people as well, a lot of it is driven by notions of reciprocity. I neither want to be a sucker, who cooperates when everyone else is defecting, nor a jerk, who defects when everyone else is cooperating. This gives psychological appeal to conditional Kantianism. And it’s not just me, if tit-for-tat sentiments, embracing both the good and the bad, are more generally intuitive.
But what does all this have to do with tax? I’ll address that in a separate post.
As it happens, I didn’t end up recounting the story either in the AM class or at the PM public session, as it would have taken too much airtime. But I’ll indulge myself by leading with it here, before turning more particularly to the paper in a follow-up post.
It’s September or perhaps very early in October 1974, and I’ve recently arrived at Princeton University as a 17-year old freshman. (I later ascertained that 94% of the class was older than me – this in an era when 18 was the legal drinking age and there was an on-campus student pub at which you’d be carded.)
Having both a competitive nature and a family background that placed intense value on “intelligence” and academic achievement, I was eager to rate myself against the field, as well as judge myself against demanding self-expectations. I also made a point from the start of taking classes in which there were frequent student papers, because I liked writing, along with the greater control over content that they offered relative to answering exam questions.
The first short paper I got back, presumably in history or political science, came out in accordance with my self-demands. But then came the second one, in Intro to Moral Philosophy. This was a lecture course taught by Thomas Scanlon, but my “preceptor” (as they called the leaders of the weekly small-group seminar meetings) was a graduate student in the philosophy department whose name I still recall.
This paper’s subject was Kant, and more particularly the categorical imperative, which might be stated (per Wikipedia) as follows: “Act according to the maxim that you would wish all other rational people to follow, as if it were a universal law.”
Intellectually unformed though I then was, I realized that, in interpreting it, one faces what I might today call a “level of generality” problem. The example I thought of was as follows: While it DOES mean, say, that I shouldn’t lie because if everyone lied we would lose the ability to have the truth believed, it surely DOESN’T mean that I can’t go to the Wawa Market on Alexander Street at 8 pm, on the ground that no one could go there if everyone tried to at the same time. So, in attempting to apply the categorical imperative, there is a broader issue, which may have no simple or obvious answer, regarding the level of generality at which one should state the maxims that one is testing for rational consistency.
To this day, I don’t think that’s bad for what was presumably a 2-page (or at the most 5-page) paper in an undergraduate Intro to Philosophy class. But I got it back with a grade of C+ and some sort of peremptory, even angry or at least disgusted / impatient, scrawl – which might as well have been in crayon – to the effect of: No, that’s wrong, that’s not what the categorical imperative says. No effort beyond that to engage or explain where or how the grad student thought I had gone wrong.
These days, when a student gets a poor grade and comes in to see me, I’ll try to reconstruct the reasons for it (if it’s an exam that doesn’t have comments like a graded paper), but I’ll also say very strongly if this appears to be among the student’s concerns: This DOESN’T mean you’re a bad student, or not good at law or at tax, etc. – it’s just a thing that happened one time in terms of answering one question that might have been either well or poorly chosen (and then graded) by me.
But I didn’t have the older me to tell me this at the time, nor did I go talk to the graduate student, towards whom I now felt hostile. (Plus, I knew it was generally bad form to complain about grades.) What I should have done, of course, is go see Scanlon – not to complain about the grade as such, but to get broader dialogue and feedback, but the thought of doing this never occurred to me. I think I viewed him, through no fault of his own, as too far removed and remote from me.
Taking the whole thing far too seriously, I was shaken by the grade, which hurt my self-confidence (hence, I told no one about it at the time), even though I felt that it was misguided, unfair, perhaps biased for some specific reason that I couldn’t fathom, and stupid. I also concluded that maybe I wasn’t fated to do as well in philosophy classes as in other liberal arts subjects. I responded by working more diligently for the rest of that semester than I ever would again. (Once I had restored my self-confidence via my final fall 1974 results, I continued to take my schoolwork, for the most part, reasonably seriously, but I developed a tendency to prefer pursuing my own intellectual interests to those of a particular course or instructor.)
Anyway, the very interesting Roemer paper raises, among other questions, that of how good Kantians should frame the maxims that they are hypothetically universalizing in their minds. Depending on the context, the answer to this question is sometimes clear, but other times much less so.
I see about an 80% chance that I will end up recounting the tale of the unfair bad grade (worst of my career) that I got as a freshman on a Kant paper. Not that I’m still brooding about it or anything!