This post is about the game Hades by Supergiant Games, and how to enjoy it better. Link: on Steam.
Hades can be hard. Believe in yourself. You'll get better, and then a lot better.
Failing is okay. Failing is good. Failure is success.
Seemingly-insurmountable bosses become easier with experience and practice.
An incomplete run is still a successful run. It rewards you with various forms of progress.
First off, I recommend the official Discord server for Hades / Supergiant Games. The link is available in the game's main menu! There are dedicated channels for Hades discussion, with many helpful people, and useful pinned guides.
I found the following extremely informative and helpful: link. Advanced players should also read the following: link.
Hades can be very fast-paced. When you're uncomfortable with its speed, or simply want a closer look at the movements for training purposes, it's useful to slow the game down via slowhacks. Hades also has some unskippable sequences which you can fast-forward by speedhacking. See my post on speedhacking for a how-to.
Some weapons and aspects are significantly easier to play with macros. The suggestions below are for gamepads, but can be adapted for KBM.
I suggest using Steam Input to reconfigure your gamepad for Hades. Steam automatically "hooks up" to any gamepad, enabling additional functionality: global for desktop control, local for any given game. It lets you remap arbitrary keys, create button chords and macros, and more. Some gamepad manufacturers also provide their own software which can do similar things.
Different macros are suited for different weapons and builds. I recommend creating different profiles for different weapons. In Steam Input, this can be done via Action Set Layers or Action Sets. Configure hotkeys for switching between those.
While held, repeat dash. Suggested interval: 300ms. Can safely replace the normal dash key.
Reduces button wear and tear. Reduces finger strain. Makes dash-attack spam easier and more consistent. Makes general movement easier.
When using the hidden aspect of the fists, I suggest switching to a config without this macro. The aspect has its own built-in auto-dash, with a different cadence.
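As a sanity check when choosing intervals, the hold-to-repeat behavior can be modeled as a simple timeline. This is an illustrative Python sketch of my own, not anything Steam Input exposes; Steam Input implements the actual repetition natively:

```python
# Model of a "repeat while held" macro: given how long the button is held
# and the repeat interval, list the timestamps (in ms) at which the action fires.
def repeat_timestamps(hold_ms, interval_ms):
    """Times at which a held key fires, starting immediately at 0."""
    return list(range(0, hold_ms, interval_ms))

# Holding dash for one second at the suggested 300ms interval
# yields four dashes, at 0, 300, 600, and 900 ms.
print(repeat_timestamps(1000, 300))  # [0, 300, 600, 900]
```

The same model applies to the 32ms attack repeat: holding for one second fires roughly thirty times, which is why it feels like a continuous stream.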
While held, repeat attack. Suggested interval: 32ms. Can be combined with Attack Reload on the same key.
Useful for the sword, spear, and shield. On the sword, this is like Flurry Slash. On the spear, this is like Flurry Jab. Those hammers are still useful because they also make your attacks faster.
Attack, tiny delay, reload. Suggested delay: 64ms. Can be combined with Auto Attack on the same key.
Useful for Aspect of Hestia, without requiring an additional key if you already have Auto Attack. The delay is chosen to allow one non-empowered shot before the reload, if the weapon was not already empowered. Feel free to reduce it. Empowered shot executes regardless of delay.
See Dash Attack Reload Dash for even more powerful Hestia action.
Dash, tiny delay, attack. Suggested delay: 50ms.
Useful for the bow (any aspect) and the sword (Aspect of Nemesis). Allows you to consistently execute dash-attacks with one key. Significantly easier than timing two keys together. On other weapons, I prefer to combine Auto Attack with Auto Dash.
Set dash activation to "start press" rather than "regular press", which allows you to dash out early by hitting the normal dash key.
Attack, tiny delay, dash. Suggested delay: somewhere between 50ms and 200ms, depending on the weapon. Enough to avoid a dash-attack.
Can be useful for Merciful End on weapons where a single dash-attack only applies Doom and fails to trigger Merciful End via Divine Dash. In my experience, the sword tends to behave this way. On weapons where a single dash-attack applies Doom and triggers ME, spamming dash-attacks is better. See Auto Attack and Auto Dash.
Dash, delay, attack, delay, reload, delay, dash. Suggested delay: 50ms between each action. In Steam Input, this means increasing "fire delays": 50ms, 100ms, and so on.
Improved version of Attack Reload, specialized for Aspect of Hestia. Ensures that each shot is a dash-attack, triggering Hyper Sprint for Rush Delivery, and benefitting from dash-attack boons such as Chaos' Lunge and Artemis' Hunter Dash. Ensures that you always cancel the reload animation by dashing, keeping you safe.
The delays in the macro also allow you to turn towards an enemy for the shot, then quickly turn away for the subsequent dash.
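To translate the per-step gaps above into Steam Input's cumulative "fire delays", the arithmetic is simply an increasing offset per sub-command. A small illustrative helper (my own sketch, not part of any Steam API):

```python
# Convert a macro's action sequence and a fixed per-step gap into
# cumulative fire delays, the way Steam Input expects them.
def fire_delays(actions, gap_ms):
    """Pair each action with its cumulative fire delay in ms."""
    return [(action, i * gap_ms) for i, action in enumerate(actions)]

print(fire_delays(["dash", "attack", "reload", "dash"], 50))
# [('dash', 0), ('attack', 50), ('reload', 100), ('dash', 150)]
```

So the fourth sub-command of the Hestia macro gets a 150ms fire delay, matching the "50ms, 100ms, and so on" rule above.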
Speedhack: makes the game as a whole run faster than intended.
Slowhack: inverse of speedhack. Typically makes the game as a whole run slower.
I recommend this for single-player games only. Do not cheat in multi-player games. Don't be a nuisance to your partners/opponents.
Many games are generally sluggish, or have sluggish segments such as unskippable cutscenes, forced walking sequences, and more. Whenever you feel bored and wish you could fast-forward, speedhacks are for you.
Some games, for some players, can be so challenging that the player is unable to progress without cheats. Slowhacking can make this easier in a way that doesn't circumvent any mechanics, merely compensating for reaction time. Slowhacking can be useful for learning and training, before upgrading to "proper" speed.
There are probably many different ways. At the time of writing, I use and recommend Cheat Engine. Official site: https://cheatengine.org. CE runs as its own app that "attaches" to another app, such as a game, to perform arbitrary hacks on it. CE has many features. Speedhacking is just one of them. I suggest reading CE docs/guides.
Kill Cheat Engine before launching any multi-player game, otherwise you might get banned from it. Many multi-player games have their own "cheat detection" which can produce false positives.
For convenience, you want CE global hotkeys, which can be used without alt-tabbing. At the minimum, I suggest the following:
numpad 0: attach to the current process.
numpad 1 through numpad 6: speed presets.
numpad +: increase speed.
numpad -: decrease speed.
After launching both CE and the target game (or another app you want to hack), hit the key to "attach to current process", then use the appropriate speed keys, and enjoy.
This section is out of scope for speedhacking, and might eventually be expanded into its own post.
When using a gamepad, it can be inconvenient to reach for the keyboard to use CE hotkeys, or other global hotkeys. This is fixable by emulating keyboard keys on the gamepad, for example via Steam Input or DS4Windows.
Compared to KBM, gamepads have very few keys. You can't spare them for additional global hotkeys. However, you can find combinations which are normally unused. For example, pressing or tilting the right analog stick and simultaneously pressing one of the face buttons. Or similar on the left side. Such combinations never occur in normal gameplay because a thumb can't be in two places at once. In Steam Input, pressing any button, or tilting an analog stick, or performing another action of your choice, can overlay a different configuration (called "action set layer") over existing keys, allowing you to configure a large number of additional actions, some of which can be used for CE and speedhacking.
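The chord idea can be sketched as a toy dispatcher: a base input held by one thumb, plus a face button, selects an action from an overlay layer. All names below are hypothetical and purely illustrative; in practice Steam Input does this mapping for you via action set layers:

```python
# Hypothetical overlay layer: while the right stick is pressed,
# face buttons are rebound to Cheat Engine hotkey actions.
CHORD_LAYER = {
    "A": "ce_speed_1x",
    "B": "ce_speed_2x",
    "X": "ce_speed_4x",
    "Y": "ce_pause",
}

def resolve(base_held, face_button):
    """Return the overlay action for a chord, or None outside the layer."""
    if base_held and face_button in CHORD_LAYER:
        return CHORD_LAYER[face_button]
    return None

print(resolve(True, "B"))   # ce_speed_2x
print(resolve(False, "B"))  # None (no chord, the button keeps its normal meaning)
```

The key property is the `base_held` guard: without the stick press, face buttons behave normally, so the chords never interfere with gameplay.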
The in-universe events happen in this order, and should be read and played in the same order.
I can highly recommend the Russian version of the book. The original is in Polish, which should be comparable. I haven't tried the English version, or "Season of Storms" or any later works, and can't vouch for their quality.
Look for an EPUB version and use a decent reader. Avoid unusable formats like PDF.
It's been a long time since I played it, so my ability to provide advice is limited to the following:
Check the game's page on PC Gaming Wiki and follow its recommendations. The rest of this article is all about Witcher 3.
At the time of writing, Witcher 3 has a "classic edition" (CE, version 1.32) and a "next gen" edition (NGE, version 4+). Steam installs NGE by default, but also lets you choose CE. I've played only NGE, and have no opinion on CE. The mods listed below are all compatible with NGE. Some of them may not exist for CE.
After switching from DX12 to DX11, disable the launcher by adding --launcher-skip to the game's launch flags.
Check the PCGW article to find the location of the user.settings file for your system and game version. Note that when running DX12, you would be using the file dx12user.settings, but you should really switch to DX11 for better performance. When editing this file, you can simply append entries; the game reorders the content automatically.
By default, the game has extremely obnoxious videos on the loading screen. Add the following to disable them.
[LoadingScreen/Debug]
DisableVideos=true
Add the following to enable the console. Afterwards, it can be summoned with the ~ key, unless you change the key by editing config files. See console tips below.
[General]
DBGConsoleOn=true
This section may seem daunting, but trust me, modding the game is well worth the effort!
Most mods can be installed and updated via Vortex, the mod manager from Nexus. I've been using Vortex without issues. However, the authors of Witcher 3 mods tend to recommend against Vortex, and in favor of the Witcher 3 Mod Manager, which I haven't tried.
Some mods require manual install. Some require additional tweaks in input.settings
. Some require invoking the Menu Filelist Updater which is listed below. Check each mod's description for setup instructions.
NoWitcherSenseFX and WitcherSenseDoubleRange. NoWitcherSenseFX is particularly important because in addition to making witcher sense more usable, it fixes an annoying NGE bug where witcher sense SFX gradually breaks music.
An auto-loot mod: edit input.settings to configure a hotkey. I use IK_Mouse5=(Action=AutoLootRadius). The hotkey must be placed in multiple sections.
A horse control mod, configured via input.settings. Beware: in its settings, the terms "canter" and "gallop" are reversed!
There are also potentially interesting mods that I haven't tried.
When starting a new game, either import a Witcher 2 save (if you've made the right choices), or choose to simulate a Witcher 2 save. If you choose the latter, at some point in Witcher 3 you will be asked questions to determine various Witcher 2 choices that have an effect in Witcher 3.
Disable tutorial popups ASAP. You'll learn the game just fine without them, and I personally found them extremely obnoxious. They also break the tourney horse race in B&W.
Ignore damage numbers in skill tooltips. Damage of Signs scales with enemy health (current health for Aard, maximum health for other Signs). Tooltips always lie. Experiment with everything.
You get less XP for lower-level enemies and quests. My suggestion: don't think about it. You'll end up at roughly the same maximum level regardless of completion order. The developers did this to prevent overleveling, which would make combat too easy and less fun.
As soon as is practical, raise combat and Gwent difficulty to max. Makes things more interesting.
Use the Fleet Footed ability, and learn when to use the small dodge.
Loading screen tips tell you that light armor is best for stamina regen. This is misleading. Medium armor with Griffin School Techniques is far better for stamina regen.
Fighting multiple opponents may feel very different from fighting one. They don't coordinate attacks and may attack from offscreen. This forces you to dodge or parry more often, making combat significantly slower. Learn to enjoy this. Prolonging combat is a positive rather than a negative, because it makes transitions between ambient and combat music less grating on the ears. Many combat tracks in Witcher 3 can be annoying at the start, especially if heard frequently, but eventually pick up and become interesting. Long combat makes the music better.
Initially, Signs are underwhelming. Their effects are weak and stamina regen is slow. However, Signs become extremely strong with high Sign intensity and stamina regen, obtainable later in the game via slottable greater mutagens, HoS enchants, B&W mutations, Grandmaster Griffin gear. A magic-oriented build has been my favorite in multiple New Game+ playthroughs.
See config tweaks for enabling the console. Also see this outdated but still useful reference: https://commands.gg/witcher3. Some particularly useful commands are highlighted below.
The following command adds almost every Gwent card, in its maximum total obtainable amount:
addgwintcards
Limitations: some cards are missing, and must be added individually via additem with the appropriate item codes, which can be found on the Witcher wiki. See the Gwent article.
Shave:
setbeard(0)
Maximum beard:
setbeard(1)
Witcher 1 and Witcher 2 can feel like a slog when backtracking or otherwise running around, with no in-game way to speed that up. Speedhacks are highly recommended. See my post on speedhacking.
This post is about the game Divinity Original Sin 2 by Larian Studios, and how to customize it for maximum enjoyment. Links: on Steam, on GoG.
Like most games, DOS2 requires some unfucking via mods. Unlike most games, it has a very nice selection of mods, both built-in and external.
In-game, these are called "gift bags" and have a dedicated UI section.
Trust me, this is well worth the effort.
Link: https://github.com/NovFR/DoS-2-Savegame-Editor.
Characters run extremely slowly and have extremely long casting animations. Speedhacks are practically required. See my post on speedhacking.
Use fast travel. Set up a hotkey for the waypoints menu.
SAVE A LOT. Keep named manual saves in addition to quick and auto saves.
Don't be shy about disabling music when you find it grating.
Ability to talk to animals is essential. They have amazingly written and voiced dialogs. Start with the Pet Pal talent, or enable the built-in mod ("gift bag") Animal Empathy.
Useful external resources:
Note: avatar dialogs and story choices are different from companion dialogs and story choices.
Link: https://en.wikipedia.org/wiki/Parasyte.
Very competent writing. The writers did their homework on biology and ethics, and it shows. Very well thought out. Deeply thought-provoking. Carefully averts common anime tropes. Not your typical "shonen" anime.
Parasyte does have a few deus ex machina moments and out-of-character scenes. To me they feel forced, as if a meddling executive wanted to make a certain point and had it shoved in without consulting the original author. Two examples come to mind. One is the final scene in Hikari Park, where I feel that both of the women present, and to a lesser extent Shinichi, act horrendously out of character, with nonsensical out-of-context lines. The other is the beginning of the last episode, which, while containing some cool ideas, seems forcibly inserted to create a "happy ending" for the more squeamish viewers who can't fully accept what happened up to that point. Otherwise, most of the writing seems extremely solid and well thought out.
The series pays careful attention to biological details. It seems that after coming up with the general concepts, the authors gave a lot of thought to the implications of parasite biology, and made them into plot points.
I initially found it implausible that parasites tend to master language and learn immense amounts about the human society, technology, and customs in just a few hours, often having little to no contact with the stuff. But given the parasites' aptitude for mimicry and being one large "sentient muscle", this isn't that implausible. In order to take over a living body without killing it, then maintain and control it, the parasite assimilating the brain must learn and replicate its structure on the fly. This might give them the human's subconscious skills such as language and social customs. They don't seem to receive higher-level stuff like formal knowledge and memories.
The series explicitly points out that non-head parasites have to learn language from scratch. It slightly stretches belief in the beginning, when Migi utters its first words the next morning, having never heard those words before. But for the most part its learning speed is handwaved by reading tons of books and encyclopedia articles really fast. This seems to become a habit: throughout the entire series, whenever Shinichi and Migi rest at home, Migi is shown reading. This explains many of the differences between Migi and other parasites, and shows how much attention to subtle details went into the writing.
Non-head parasites perform the same on-the-fly mimicry while assimilating non-brain structures such as an arm or jaw/neck/chest. They can mimic skin, hair, and hard structures such as bones, teeth, and blades. More importantly, their tissue can perform the actual function of brains, muscles, and eyes. I wonder if they can morph into "refinery" organs such as digestive tract, liver, kidneys, etc., or if their tissue is not capable of adapting that far. After all, they have no such systems of their own.
One recurring point is that parasite intelligence is proportional to how much of its body is interconnected. Parasites can split their body into parts, which can be capable of thought and speech on their own, but very small parts have so little brainpower, they can't even rejoin the rest on their own. The series uses this for some interesting plot points.
The authors carefully make the point that the parasites start inherently identical, and diverge due to the differences in their maturation environments and other circumstances. The divergence produces a wide gradient from instinctive, barely intelligent murderers like A, to highly intelligent scientifically-minded murderers like Reiko, to highly intelligent semi-pacifists with a degree of empathy like Migi and Jaw.
When it comes to fighting, the series carefully avoids "power level" tropes. It's stated and shown several times that parasites are evenly matched in open combat. This sets them apart from humans, who almost always vary in strength, skill, preparedness, exhaustion levels, resolve, and more, which is aptly shown for contrast. Anime shows often rely on power levels: A is stronger than B, so A wins by default. Or conversely, A beating B establishes a linear power ladder with transitive relations. With parasites this is averted. To prevail, a parasite must do something different, like ganging up, taking advantage of the environment, using its host more efficiently, attacking before the target knows friend from foe, or using unusual tactics other parasites don't know how to counter.
The parasites' fighting style reinforces their image of extremely logical creatures focused on self-preservation. Despite the all-out flashing blades, they block every incoming attack, always prioritizing defense over offense, fitting their nature as a truly solitary lifeform that can't afford to die. This stays true even when the parasite is attacking out of instinctive fear and aggression, which might unbalance a human and make them reckless. Humans tend to leave openings during a fight, both in real life and in the series, fitting our nature as a collective lifeform which can afford to lose individuals.
I'm impressed by how the series contrasts the ethical views of humans and parasites. The views tend to mirror their biology. The human empathy and modern humanistic morals are a product of our inter-dependence. At some point a character remarks that humanity is a single collective lifeform that consists of millions of individuals. In contrast, the parasites are solitary lifeforms with no reproductive ability. Note that such an organism is evolutionarily implausible, indicating a possible artificial origin. Regardless, for them it makes perfect biological sense to only care about self-preservation. Their psychology and ethics tend to reflect this perfectly. Migi reiterates many times that it lacks empathy.
I have a general impression that most people who grew up in a modern highly developed country, have lived comfortable lives, and received a good education, tend to have humanistic morals like "all sentient life is precious", which we mostly owe to the Renaissance. During the late 20th century, these morals have developed to include ideals like "everyone is created equal" and "everyone should have equal opportunities". I can't really speak for others, but for a really long time I have ascribed these morals to common sense and intelligence. I have no doubt that plenty of highly intelligent humans don't share them, but humans are faulty and our intelligence is narrow. It always seemed obvious to me that if we create an artificial super-intelligence whose only base motivation is survival, if truly super-intelligent compared to humans, it would see value in friendship and cooperation and would consider it the greediest, most profitable long-term strategy, as opposed to isolation or genocide. The 20th century seems to demonstrate this well: when trading replaces war, each economy seems to benefit. I have always assumed that humanistic morals stem from the laws of the universe rather than from human idiosyncrasies, and would be universal among sufficiently intelligent lifeforms. This might be a common fallacy known as projection: ascribing your own traits to others; in this case, assuming it's your views that are universal. Regardless of reasoning, I expect many other viewers to have the same feeling about humanism.
For contrast, Parasyte gives us highly intelligent creatures, some well educated in human ethics and evolutionary biology, who have clearly given the topic a lot of thought and don't share these humanistic morals. They know that others are sentient just like them, and have no qualms about killing, neither humans nor their own kind. This reminds a modern comfort-coddled viewer that intelligence doesn't come hand-in-hand with empathy and humanism. The series further emphasizes this by contrasting: regular humans with humanist views, parasites who murder without a second thought, hooligan humans who bully others, a human who adopts a parasite's worldview, a human who's a cruel serial killer, and eventually parasites with humanist tendencies. This reminds us that while biology greatly influences ethics, there will always be deviants. We can't simply say "human = good", "monster = bad". Who's the real monster?
From Migi and Jaw we know that parasites survive just fine without cannibalism. Shinichi and Migi explicitly tell this to Reiko. From Reiko we know the reason for cannibalism: head parasites receive a powerful directive "devour this species", where "this species" is what they just took over, whether human or dog. Reiko submits to the cannibal hunger but eventually develops respect for sentient life, with humanist tendencies. This receives an interesting development in the ending. Many parasites get slaughtered by human forces, and the remainder survive because they learn to avoid murder. This makes a subtle point that even for a species that starts as solitary cannibals, the kind least predisposed to peaceful coexistence, survival eventually demands coexistence and cooperation. Coexistence "wins" because groups are stronger than individuals. The collective lifeform of humanity dominates over the solitary and scarce parasitic lifeforms, imposing its policy of peace, and the remaining parasites must coexist and contribute, or be exterminated just like dangerous human deviants. As stated several paragraphs above, to my naive eyes this seems like a law of the universe that's unlikely to be overturned even by superior physiology.
Head parasites have to spend some of their brainpower on body maintenance, controlling the vital organs. Consider that bigger animals have bigger brains. Compared to humans, elephants and whales have much bigger and heavier brains despite much less intelligence. This indicates that body maintenance takes a significant amount of brainpower. Now consider that Migi doesn't have this handicap, and gets to spend its full brainpower on thinking and learning. Because of this, it starts off more intelligent than most parasites. It also gets smarter faster because it never stops learning. Whenever they're at home, Migi is always reading books or science articles.
More interestingly, Migi's views of inter-species relations differ from other parasites because it doesn't have their cannibal hunger. They easily murder defenseless humans, and lack empathy towards intelligent creatures. As a result, they see humans as mere prey and inferior species. In contrast, Migi gets constantly lectured by Shinichi about the value of human life, spends more time studying and thinking, and undergoes minor physiological changes. The series gives us good reasons for why its views eventually diverge.
Migi doesn't seem to share the prey-predator instincts of other parasites. Others react to Migi instinctively, displaying a combination of fear and killing intent, while Migi has no such reaction and uses violence only in self-defense.
Many tragedies happen around Shinichi because of Migi's mere presence. The first school massacre by A, the second school massacre by Shimada, Kana's death, the forest murders in the rural area where Shinichi spends a week after the fight with Gotou, and probably more. He always wants to rectify the situation, to clean up after himself, and does what he can, but it's never enough, or comes too late. Despite his best efforts, his and Migi's mere presence costs other people their lives or traumatic experiences. Sometimes it's directly his fault. In episode 15, in an underground parking garage, he causes a girl's death by telling her to get away from a "parasite"... which hadn't revealed itself yet, and which kills her first for being a witness. Migi also catalyzes the tragedies. While Migi doesn't kill any pure humans throughout the series, it actively tries and comes very close a few times. Several times it suggests confronting hostile parasites in a human crowd, using them as a "meat shield". This forms a nice contrast with Migi's civilized speech and care for Shinichi, emphasizing its lack of empathy and care for human lives.
We don't observe Satomi's perspective much. All we know is that she notices Shinichi's changes and has trouble accepting them. In retrospect, it seems likely that she realized more than she lets on. The same applies to Tachikawa (the girl with glasses who uncovers Shimada), who has proven to be very observant, but Satomi gets many more chances. Shinichi performs superhuman athletic feats in her presence. He often talks to the right hand in public, sometimes in class, sometimes alone with Satomi while closely observed by her. He also accidentally gives her all kinds of clues. The first time we see Satomi, Migi gropes her breast and Shinichi claims it acted on its own. His right hand is unscratched after a beat-up by hooligans, even though his face and left hand are all bruised. On a date, Shinichi says something Migi-like, ascribes this to a "friend", looks at his right hand, and mumbles that said "friend" is not exactly a "person"; Satomi asks if said "friend" is the reason he's changed. He talks to his hand on several occasions in her presence, and tends to immediately run away, usually to deal with a nearby parasite. Satomi would have to be monumentally dense to miss those clues.

We also know that she occasionally stalks and observes Shinichi. Take the time he threw a dead puppy in the trash, then changed his mind and buried it: we later learn that she saw that. She also trails him in Hikari Park. It stands to reason that she stalked him a few more times, maybe saw him talk to Migi, maybe saw Migi's transformations. She definitely should have seen Migi at the end of the last episode, where Migi breaks its secrecy policy to catch her, and she acts as if nothing happened and keeps quiet about it. She probably starts suspecting Migi's existence quite early, getting more and more confirmations throughout the series. This feels like a nice "rewatch bonus" for a thoughtful viewer.
Reiko wonders about the meaning or reason behind the parasites' existence, just like humans have wondered about our own for millennia, until evolutionary biology came along with a simple tautological explanation. Unlike humans, parasites seem evolutionarily implausible, therefore must have a creator, either human or non-human intelligence. Reiko is right to wonder. This is left intentionally unexplored and gives the viewers something interesting to ponder.
As an appetizer, the series features a strawman view by major Takeshi: parasites exist to cull the humans' exponentially growing numbers and should be valued as predators that keep us in check, saving the global ecosystem. As such, they could have been created by humans themselves. The narration alludes to this with the line "Someone had a thought: life on Earth must be protected". This could very well be a strawman, but doesn't contradict the events of the series. Parasites have many properties you would expect from such a weapon. Parasite larvae target almost exclusively humans. Parasites can't reproduce, which prevents them from spreading like a plague and exterminating their prey; the creators can gradually increase their count until they're killing humans at just the "right" rate. Their physiology and biochemistry is amazingly compatible with ours. Their intelligence, learning rate, mimicry, perfect adaptations for replacing the host and blending into the society, put them into a good position to kill more; compared to skulking in the shadows and hoping for good luck, it's much easier to walk around the streets and make your good luck. Whether or not this actually works to reduce the human population doesn't matter; someone could be crazy enough to try. It seems ironic that by the end of the series the remaining parasites have to become, for all intents and purposes, "human" to survive.
Conclusion: watch Parasyte. Do it slowly, taking the time to think.
"By psycho, of psychos, for psychos"
Evangelion requires special preparation for full enjoyment.
The most important thing to know is that Evangelion is not about mecha action and not about optimistically beating the odds. It's about emotional turmoil, about an unshielded psyche suffering on contact with the world, conveyed through a unique narrative device of artificially stripping most characters of their social mask, their emotional shield. The story merely provides context, and the mecha action theme is just a wrapper to attract viewers. Psychic turmoil, and capturing the failings of the human psyche that we usually fail to notice, or hide from ourselves and each other, is what the author really cares about.
Almost all important characters are socially maladjusted and display traits of various psychoses. Some appear emotionally healthy, then easily break under pressure. This can feel unrealistic and galling to a viewer with a healthy social circle. I recommend interpreting this as a narrative device. Characters are artificially "unmasked", stripped of their social interface, the critic, the "super-ego" that dictates social behavior. Instead of showing a character's psyche and inner turmoil separately from their actions, Evangelion tends to show it through their actions, often unrealistic for a normal, socially adjusted human.
Character development inverts your expectations. Typical expectation is that characters progressively get more skillful, competent, powerful, and emotionally stable. In Evangelion, characters get progressively more psychotic and emotionally decrepit. When they get to know each other, instead of forming bonds of friendship and love, they become more wary and afraid of each other. The show explicitly points out how humans need each other for emotional comfort, but also run the risk of hurting each other due to carelessness and differences, and has no shortage of examples.
Many bizarre and psychotic actions can only be understood by relating them to your own emotional experiences. Figuring them out can be a lot of fun. One can view Evangelion as a psychedelic puzzle book. It captures various failings of the human psyche and asks you to recognize them in your own feelings and experiences. It offers you a chance to empathize with failings we often keep hidden under the social interface, which are broadly on display here.
The main protagonist is the most useless, cowardly wimp. Evangelion inverts the usual expectation of the hero growing stronger to beat the ever-greater odds, as the character only gets more pathetic as the plot goes on. The show even toys with our expectations by pretending that the character gets over his troubles, only to snap him again, several times. Evangelion seems to make a special point of building the most guilt-ridden, unwilling, passive "hero" imaginable and dragging him, often literally, into responsibility over the lives of others, complete with the consequences. I haven't been able to understand this "point" yet, neither logically nor emotionally.
The original series fails to conclude the plot. The last two episodes leave it mysterious, open to speculation and interpretation, and focus exclusively on the inner psyche. This can be enjoyable if the viewer is prepared in advance. Otherwise, it can be frustrating. The actual conclusion is "End of Evangelion", a "movie" released the following year, which continues directly from the third-to-last episode and concludes the "real world" action, with a healthy dose of psychic puzzles.
Many important details are only briefly alluded to, and need to be deciphered. The series rewards watching carefully, paying attention to details, thinking back, and thinking ahead. There's plenty of fun to be had by thinking about the implications of many plot details, events, technologies, and more, that are left unexplored on-screen. The show on the screen is like a compressed archive that decompresses in your head into a greater sum total of information; that's how to get the most out of it.
Very good. Solid sequel. Worthy successor. Got patched, addressing many release problems such as faces. Requires additional mods to unfuck.
Multiplayer is non-functional. This article is about single player only.
Competently done, but lacking charisma compared to ME1/2/3.
Enjoyed the squad banter. Would prefer it weren't limited to the Nomad.
Subjective grades:
Grinding credits, resources, gear progress, and skill progress can be tiresome, especially on the first playthrough. Consider using CheatEngine. Credits and resources can be easily found as 4-byte integers. XP and skills require mods, see below.
Use mods to unfuck the game. Requires Frosty Mod Manager. My mod list:
This post is informed by many years of Go, and months of Go with exceptions. I am well aware of many arguments for error values. Some of them are addressed below.
Reddit discussion: https://www.reddit.com/r/golang/comments/r2h31i/shorten_your_go_code_by_using_exceptions/
Update 2023-10-23. The original version of this post referred to https://github.com/mitranim/try. The updated post refers to https://github.com/mitranim/gg, which subsumes the previous library and offers more features.
"Go doesn't have exceptions".
Go has panics, which are exceptions.
"Errors-as-values is simpler than exceptions".
Decent argument that doesn't apply to Go. Go already has both. We don't get to choose to use just one.
"All errors are in function signatures".
The stdlib has many documented panics. New releases frequently add more. Panics are not in function signatures.
"Panics are reserved for unrecoverable errors".
Untrue in Go. Panics are recoverable and actionable. For example, HTTP servers respond with 500 and error details instead of crashing.
"Explicit errors lead to more reliable code."
Decent argument that doesn't apply to Go. Go has panics. Reliable code must handle panics in addition to error values. Code that assumes "no panics" or "panics always crash the process" will have leaks, data corruption, and other unexpected states.
"Panics are expensive".
Panics are cheap. Stack traces have a minor cost.
Use panics instead of err variables. The combination of defer, panic, and recover allows terse and flexible exception handling.
Brevity:
import "github.com/mitranim/gg"

func outer() {
  defer gg.Detail(`failed to do X`)
  someFunc()
  anotherFunc()
  moreFunc()
}
Same without panics:
func outer() (err error) {
  defer ErrWrapf(&err, `failed to do X`)

  err = someFunc()
  if err != nil {
    return
  }
  err = anotherFunc()
  if err != nil {
    return
  }
  err = moreFunc()
  if err != nil {
    return
  }
  return
}

// Suboptimal implementation, only for example purposes.
func ErrWrapf(out *error, pat string, msg ...any) {
  if out != nil && *out != nil {
    *out = fmt.Errorf(fmt.Sprintf(pat, msg...)+`: %w`, *out)
  }
}
In modern Go (1.17 and higher), there is barely any difference. Defer/panic/recover is usable even in CPU-heavy hotspot code.
Generating stack traces has a far larger cost. The examples in this post use github.com/mitranim/gg, which automatically adds stack traces. If you're using stack traces with error values, that cost is already dominant compared to the cost of defer/panic/recover.
Stack traces are essential to debugging, with or without exceptions.
Some real Go code, written by experienced developers, has errors annotated with function names, like this:
func someFunc() error {
  err := anotherFunc()
  if err != nil {
    return fmt.Errorf(`someFunc: anotherFunc: %w`, err)
  }
  err = moreFunc()
  if err != nil {
    return fmt.Errorf(`someFunc: moreFunc: %w`, err)
  }
  return nil
}
You can simplify this with defer, as shown above:
func someFunc() (err error) {
  defer ErrWrapf(&err, `someFunc`)

  err = anotherFunc()
  if err != nil {
    return
  }
  err = moreFunc()
  if err != nil {
    return
  }
  return
}

func anotherFunc() (err error) {
  defer ErrWrapf(&err, `anotherFunc`)
  return someErroringOperation()
}

func moreFunc() (err error) {
  defer ErrWrapf(&err, `moreFunc`)
  return anotherErroringOperation()
}

// Suboptimal implementation, only for example purposes.
func ErrWrapf(out *error, pat string, msg ...any) {
  if out != nil && *out != nil {
    *out = fmt.Errorf(fmt.Sprintf(pat, msg...)+`: %w`, *out)
  }
}
🔔 Alarm bells should be ringing in your head. This emulates a stack trace, doing manually what other languages automated decades ago.
So stop doing that. Automate your stack traces, and shorten your code:
import "github.com/mitranim/gg"

func someFunc() {
  defer gg.Detail(`failed to do X`)
  anotherFunc()
  moreFunc()
}

func anotherFunc() {
  gg.Try(someErroringOperation())
}

func moreFunc() {
  gg.Try(anotherErroringOperation())
}
TLDR: always spaces, never tabs; 2 spaces rather than 4.
Objective arguments in favor of spaces over tabs:
Objective arguments in favor of tabs:
Objective arguments in favor of 2 spaces over 4 spaces:
Objective arguments in favor of 2 spaces over 1 space:
Your preference is influenced by your display pixel density, resolution, OS, font family, font size, eyesight, and habits. Someone with a very large but low-DPI display is likely to prefer 4 spaces. Someone who writes code on a small display, in an IDE that uses 20% of the screen area for the actual code, is likely to prefer 2 spaces.
If you don't have a strong preference, 2 spaces seems like a better default, based on the arguments above.
TLDR: nobody wants to write pure S-expressions, and Lisps are full of hacks around them.
Disclaimer: Lisps have decades of history and many dialects with a variety of hacks. The following is just what I happened to come across. There might be more.
Examples on this page use Racket.
S-expressions are a syntax for binary trees. The base notation has only atoms, pairs, and nil:
symbol | atom
"string" | atom
10 | atom
(10 . 20) | pair
() | nil
The "abbreviated" notation omits the . from pairs that end with another pair or nil, combining them into lists:
(10) -> (10 . ())
(10 20) -> (10 . (20 . ()))
(10 20 30) -> (10 . (20 . (30 . ())))
(10 20 . 30) -> (10 . (20 . 30))
When talking about S-expressions as code, we usually mean the abbreviated notation, as in Lisps. Writing code in the base notation is out of the question, but pairs will come back to haunt us later. Example Lisp code:
(define add (lambda (a b) (+ a b)))

(define some_var (add 10 20))
We can express new concepts by adding meaning to symbols such as lambda, if, and so on. Each such "form" will have its internal "syntax", usually extremely simple, but we don't have to change the base notation. The cost of adding and learning new features is lower compared to other syntaxes. This also makes it easy to give users the ability to extend it, via AST-based macros.
Sidenote. Personally I like the S-expression syntax, but advocate against dynamic typing and homoiconicity as seen in Lisps. We could and should use S-expressions for statically typed languages.
S-expressions require unary negation to be written like this:
(- num)
(- 10)
But -10 was too hard to give up, so they built + and - into number literals. The language's parser supports +10 and -10, where the operator is part of the number's syntax. Note that + 10 and - 10 (with a space) don't work that way. Of course, this limited special case works only for literal numbers, not variables, and doesn't extend to other unary operators such as bitwise negation.
Despite claiming the opposite, Lisps have always had many prefix operators, not just -10.
Lisps have a concept of "quoting" code. Because the code notation happens to be a data notation, the quoted code can be evaluated as data. This also serves as the language's AST, used internally.
; Evaluate as code, result is 30
(add 10 20)

; Evaluate as data, result is (add 10 20)
(quote (add 10 20))
Writing (quote ...) and others was too much, so they added prefix shortcuts.
'(add 10 20)    -> (quote (add 10 20))
`(add 10 20)    -> (quasiquote (add 10 20))
`(add 10 ,expr) -> (quasiquote (add 10 (unquote expr)))
`(add ,@exprs)  -> (quasiquote (add (unquote-splicing exprs)))
In general, all Lisp prefix operators are aliases for "expanded" forms. They're converted after or during parsing text into AST. Parsing text and converting prefix operators is combined into a step called "reading", which returns a canonical AST.
Clojure's reader has more prefix operators, such as @A → (deref A), and a somewhat-generalized #.
Upside: because this is done once at "read time", no other code has to deal with prefix operators. Downside: standard library and user code either can't define new prefix operators, or must use an API different from functions and macros.
People have written large documents and reference implementations suggesting {} for infix. See SRFI 105. Code inside {} would be implicitly and unambiguously converted to the canonical form by the reader.
{10 + 20 + 30} -> (+ 10 20 30)
{{10 + 20} * 30} -> (* (+ 10 20) 30)
Veiled in-joke or serious request? Can't tell...
It can be observed that this proposal has grouping, but no precedence. Grouping is both necessary and sufficient. Precedence is not necessary and not sufficient. Programming languages have lots of operators that don't exist in math, and their precedence is inconsistent between languages. Precedence errors are so insidious that some languages, like Pony, ban most forms of operator mixing and enforce grouping. This proposal, while ludicrous in the context of Lisp, has at least one good idea at its core.
Racket has a special infix hack.
Remember the unabbreviated (a . b) syntax for pairs? Racket folks have found unused "dead space" in the syntax they could exploit. In addition to binary (a . b), which makes a pair, it supports ternary (a . b . c), which makes a reordered list. They use one infix operator to enable other infix operators or functions in a "general" way.
(10 . + . 20) -> (+ 10 20)
((10 . + . 20) . * . 30) -> (* (+ 10 20) 30)
It's often said that forbidden fruit is desired more strongly. Evidence suggests that when Lisp bereaves its users of infix, they develop a strong desire for more, more infix! (We heard you like infix, so we put more infix in your infix...)
Most languages have some form of namespacing. Some mix several forms.
one.two.three
one->two->three
one:two:three
one::two::three
one/two.three
Since inception, Lisps have allowed special characters inside symbols, and avoided infix operators. It naturally followed that Lisp package systems implement namespacing inside symbols. Common Lisp and Racket use ":", while Clojure uses "/" and ".".
package:identifier
namespace/identifier
value.method
Still a hack, because useful applications of these symbols involve sub-parsing them. Conceptually, these are separate identifiers combined by an infix operator. The parser (or "reader") should have parsed them for you, storing the pieces in the AST. That's what Clojure does: its symbols are classes with separate "namespace" and "name" parts.
Sidenote. One simple alternative is to extend "reader macros" by supporting infix ":", converting one:two:three to canonical :(one two three). Lisps already special-case "." in a similar way; ":" would have a higher precedence. As long as there's no other infix, this should parse unambiguously. Alternatively, we could ditch the pair syntax and use "." for namespacing. Improper pairs could be printed as (cons a b).
The major downside of the solution above, aside from added complexity, is that it's non-extensible: adding more infix would create parsing ambiguities, which we can't resolve because we can't afford () for grouping. I would appreciate a simple and flexible approach that doesn't seem hacky.
If Lisp people haven't been able to stick with pure S-expressions, nobody will. Languages designed for practical use must include common prefix and infix shortcuts. To me, everything above seems hacky or complicated. Elegant approaches are topics for other posts.
TLDR: variadic -, as seen in Lisps, has gotchas; it may be allowed syntactically, but not as a variadic function.
The - operator tends to be overloaded with two different operations: negation and subtraction. Negation is always unary. Subtraction can be variadic. Unary subtraction is an identity function that returns the first argument unchanged without negating it.
ƒ negate(a) = 0 - a
ƒ subtract(a) = a
ƒ subtract(a b) = a - b
ƒ subtract(a b c) = (a - b) - c
ƒ subtract(a b c d) = ((a - b) - c) - d
In math and many programming languages, there's no ambiguity because - is either unary prefix (negation) or binary infix (subtraction):
-A | Negation.
B - C | Subtraction.
But in Lisps, - is always prefix, always variadic, and when called with a single argument, it always negates it.

The following examples use Racket. Let's dynamically pass N arguments to -:
#lang racket/base

(define (subtract . args) (apply - args))

(println (subtract 11 33 55))
(println (subtract 11 33))
(println (subtract 11))
-77
-22
-11 ; Performed negation, not subtraction!
The last call performed negation on its only argument.
Correct variadic subtraction:
#lang racket/base

(define (flip fun) (lambda (a b) (fun b a)))
(define (foldl1 fun seq) (foldl fun (car seq) (cdr seq)))
(define (subtract . args) (foldl1 (flip -) args))

(println (subtract 11 33 55))
(println (subtract 11 33))
(println (subtract 11))
-77
-22
11
Now, 11 was correctly returned as-is.
Worth comparing to Haskell, which also generalizes operators into functions, but handles - differently. In Haskell, the function - is always binary subtraction:
main = do
  print (foldl1 (-) [11, 33, 55])
  print (foldl1 (-) [11, 33])
  print (foldl1 (-) [11])
-77
-22
11
Haskell doesn't allow overloading functions on parameter count. You can't define - as both unary and binary. So they special-cased unary - in the syntax, converting it to negate:
main = do
  print (-11)
  print (negate 11)
-11
-11
Lisp and Haskell create this problem for themselves by treating - as a function while overloading it with two different operations. Most languages don't have this problem because they don't have - as a function. Languages with operator overloading tend to differentiate between negation and subtraction. For example, Rust has ops::Neg and ops::Sub. A literal - is converted into calls to one of those. When passing it to a higher-order function, you pass either ops::Neg::neg or ops::Sub::sub, avoiding the problem completely.
TLDR: identifiers in programming languages should use only snake_case, Title_snake_case, and UPPER_SNAKE_CASE, ignore abbreviations, and be limited to ASCII alphanumerics with _.
This post will also touch on the structure of identifiers.
There was an earlier, more specialized post: Don't Abbreviate In Camel-Case. This one is more general.
Objective arguments in favor of snake_case over camelCase:
- Can remap _ to type without Shift. (I did.)

Conversion:
one_123_two <-> one 123 two
one123two <-> one123two
one123two <-> one123 two
one123two <-> one 123 two
one123Two <-> one123 two
one123Two <-> one 123 two
Objective arguments in favor of Title_snake_case over TitleCamelCase:
- Abbreviations remain consistent: XML_HTTP_request. No schizophrenia such as XMLHttpRequest.
- Can remap _ to type without Shift. Titled identifiers require only one Shift press. (I did.)
- Consistent with snake_case in a language that uses it for lowercase identifiers.

Conversion:
One_123_two <-> one 123 two
One123two <-> one123two
One123two <-> one123 two
One123two <-> one 123 two
One123Two <-> one123 two
One123Two <-> one 123 two
Objective arguments in favor of avoiding abbreviations, for example Json_encoder over JSON_encoder, or JsonEncoder over JSONEncoder:
- No need to memorize which terms are abbreviations, such as in XMLHttpRequest.

Example from work.
At some point I had contact with a code base involving generating Go code from Swagger. The generator had a variety of special cases for id, xml, and some other abbreviations. A field named xml_setting_id would become XMLSettingID. However, if you used an abbreviation unknown to the generator, for example XSD (XML Schema Definition), xsd_setting_id would become XsdSettingID.
The goal was noble: be consistent with the Go standard library, which stupidly uses abbreviations, for example MarshalXML. But unlike the standard library, you couldn't just remember "abbreviations are uppercase"; your brain needed the database of the exact abbreviations special-cased in that generator. So don't. Don't use abbreviations in identifiers, and don't special-case them in code generators or parsers.
Objective arguments in favor of restricting identifiers to ASCII alphanumerics with _:
Example from work.
At some point, we at Purelab were using Clojure and Datomic to build apps. Clojure symbols (the Lisp equivalent of identifiers) use kebab-case and may contain operator characters such as - and ?. Booleans are expected to end with a question mark: hidden? instead of is_hidden.
Datomic has its own idiosyncrasy: column names are globally scoped and include the entity type. So, instead of this:
create table persons (is_email_verified bool not null default false);
...you use this:
{ :db/ident :person/email-verified? :db/valueType :db.type/boolean }
For simplicity, let's suppose we use Postgres, and have a JS client. You have to either break the SQL and JS conventions by quoting the field:
create table persons ("email-verified?" bool not null default false);
person['email-verified?']
...or break the Clojure convention by using the interoperable format:
:is_email_verified
Lisps allow identifiers like email-verified? because they don't distinguish identifiers and operators, or more generally, alphanumerics and special characters. They just have "symbols". This has various problems.
- What's >>=? With bind, you can at least start guessing the purpose, or pronounce it, or google it, what a feat!
- Using : / . in symbols to implement namespacing (Common Lisp, Clojure) requires re-parsing the symbol, something the AST should have done for you. Clojure symbols are classes with "namespace" and "name" parts, indicating that they were combined prematurely in the symbol type. The AST should separate alphanumerics and operators from the start.

When making a language, follow the conventions listed at the top. Let's solve this forever and move on.
TLDR: Homoiconicity simplifies what was already trivial, while leading to poor design choices.
While this post is highly critical, it comes from a fan. I used Clojure for years, dabbled in other Lisps, wrote a few parsers and compilers. Even if this concept is not good language design, it's still pretty cool.
Homoiconicity is when the entirety of a language's syntax matches the literal syntax of some of its data structures.
This does not just mean that we can convert this text:
(10 "20")
Into some library-defined type:
ast.LinkedList{ast.Number{"10"}, ast.String{"20"}}
This means (10 "20") is the literal syntax for that AST type. In other words, the expression (10 "20") gives your program a copy of the AST node that the parser generated for this expression when parsing that program. Sometimes with caveats:
; The quote tells the compiler: this list is not a function call.
'(10 "20")

; The quote tells the compiler: this symbol should not be evaluated.
'ident
This quality can simplify the language and macros (not by much). It requires the language to be dynamically typed, or have a dynamically typed subset.
Our language probably has identifiers: names for variables, functions, operators, and so on. To distinguish them from strings, we must introduce a new data type: "symbol".
; This unquoted symbol is evaluated as a variable.
blah

; This quoted symbol is evaluated as data.
'blah

; Strings are considered distinct from symbols.
"blah"
Setting macros aside, from the perspective of data modeling, having symbols is bad. They're just strings by another name, but everyone has to choose between symbols and strings. Library APIs will make different choices and conventions. Using external data formats gets more difficult, because they usually support only strings (see JSON).
It gets crazier. Common Lisp and Clojure have keywords, which are symbols with minute differences and their own syntax. Everyone using those languages must spend time and effort choosing between strings, symbols, and keywords, dealing with idiosyncratic APIs, and dealing with conversions. I know I have.
Side note: some languages with symbol-like data types support interning, which allows comparing them as integers. In dynamic languages, this can be a minor performance hack. It can also be a memory leak. Static languages don't need it. It's not worth it.
We can probably agree that code auto-formatting is great. We can also probably agree that generating documentation from comments is simpler and more universal than special-case support for doc strings. But in any given homoiconic language, comments and whitespace are missing from the AST.
We probably don't want to define a different AST and write a different parser. Which means our "main" AST generated by the parser must preserve comments and whitespace. Since all the other AST types are built-in, this requires built-in types for comments and whitespace. Internally, they would just be strings. Just like with symbols, we've added more string-like types that should be limited to the AST, yet are built-in, easily available, and will be used where they shouldn't be. Or would be, unless...
Homoiconicity seems to require that every data type in the AST is instantiated using the exact same syntax from which it was parsed. So, how do I assign literal whitespace to a variable? How do I assign a comment?
(define whitespace
(define comment ; This doesn't get evaluated!
Comments and whitespace aren't the only information lost. Some data types might have N inputs for 1 output. One example is numbers:
0b110011
0x33
51
All of these would be parsed into just 51, losing the information about the original formatting. Even if we had preserved comments and whitespace in the AST, we can't print the original code back!
One decent upshot is that it simplifies macros. In Lisps, you can just quote a bit of code, return it from a macro, and it counts as valid AST:
(defun sum (vals) (reduce '+ vals))

(defmacro trivial ()
  '(sum '(10 20 30)))
All that's needed of macros is to return AST nodes. Programmatically manipulating an AST doesn't require special syntactic support. Calling map or head/rest on an AST doesn't care about its text representation. AST types could be defined somewhere in the standard library. Macros would import that module to use its types and functions. Non-trivial macros are already inscrutable, so we're not losing much readability.
(import std:ast)

(defmacro trivial ()
  (ast:list
    (ast:sym "sum")
    (ast:quote (ast:list
      (ast:num "10")
      (ast:num "20")
      (ast:num "30")))))
But instead, the language could convert quoted code into types from the AST module. So we're back to:
(defmacro trivial () '(sum '(10 20 30)))
What got simplified wasn't your code. It was the implementation of macro support in the language. Meanwhile, you got saddled with unnecessary data types and an inferior AST!
Collection of headcanon for Warframe, co-authored by friends and myself. This post contains massive unmarked spoilers. By the nature of headcanon, this should only be read by someone who's completed all story quests. If you haven't, get out now and return once you have.
Prime warframes and weapons were made exclusively in the Orokin times, while non-primes can be manufactured from scratch right now. Non-primes could be cheap knock-offs of the originals, but they could also be the originals, the prototypes, later refined with superior materials and designs.

The Old War cutscenes feature prime warframes and weapons. However, the Leverian, Chains of Harrow, and the Deadlock Protocol specifically place non-primes in the Orokin era. Both Harrow and Protea, found in their respective quests, are non-primes that have stuck around for hundreds of years.
One sensible explanation is that most warframes and weapons started off as non-prime prototypes. Many weapons are specifically said to be of Tenno design, implying later refinement by the Orokin. Several warframes are attributed directly to prominent Orokin or Archimedians, but there's no reason why they couldn't have iterated on the design. Some weapons, like the Euphona Prime, may have started off as a prime; alternatively, the non-prime prototype blueprints may have been lost.
Infested versions of Corpus and Grineer ships tend to be dark, implying running on reserve power. Reserve power implies reserve reactors. We supposedly sabotage a ship's main reactor, but does that really detonate the entire ship, or kill the crew due to failing life support? One would expect that after so many raids, they would start installing reserve reactors and decoy reactors, possibly in a ship section that can be jettisoned away. It's just more economical compared to losing entire ships.
(Added 2023-04-11.)
However, if Tenno found this out, cover would be blown. They would start going after real reactors. Corpus and Grineer would be back to square 1. To prevent this, it makes sense to explode ships for real if any Tenno are still present when the fake timer runs out. They even encourage this, by using the oh-so-special caches to bait Tenno into staying until the explosion. By sacrificing a few ships, they spread the word, and then the other Tenno are less motivated to stick around and find out what really happens, which is... nothing.
Corpus and Grineer infantry, as well as mass-produced bosses such as the Jackal, seem suicidally brave when facing Tenno.
Think about it. The kill count of an individual Tenno is often somewhere in the millions. Each is a one-man army that can personally genocide an entire nation, and did so. They can't be killed for good and regenerate from any injuries. They're liable to dismember you within seconds of visual contact, or without visual contact, and that's going easy on you. They will hack off your limbs and watch you bleed to death, infest you with techno-bio-parasites, burn you to death, melt your flesh with horrible chemicals, crush you into smooth paste with force fields, mind control you into killing squadmates and friends, and more. And laugh while doing so. And then they come in groups.
They have pets to match. They show up on the Infested Derelict, where nobody dares to set foot without an army, looking for cuddly pets: feral kavats, transformed by hundreds of years among the Infestation into an animal equivalent of warframes. The kavats get further augmented with Orokin Reactors and mods. They can't be killed for good and regenerate from any injuries. These "pets" are liable to tear you in half and feast on you in the middle of a battlefield. And the Tenno consider them cute and hold fashion contests.
A sane person's response in the face of this overwhelming threat? Run away! As far as you can, as fast as you can! Tribunal? Punishment for disobeying orders? Probably better than horrible dismemberment right now. Band with your ship's crew and abscond together! Become a rogue faction or join the Perrin Sequence! Try to ally with the Tenno if you can, because you lost the alternatives the moment they showed up.
Meanwhile, what do we see?
The Jackal:
The Raptors:
The Sergeant:
Corpus infantry (translated):
Grineer infantry (translated):
One logical explanation is that the mass-cloned or mass-manufactured units are intentionally kept ignorant about the Tenno threat level. They know about Tenno in general, but in the best traditions of military propaganda, must be led to believe that the Tenno are weak and cowardly (but somehow also responsible for many atrocities).
What about the robots, such as the Jackal and the Raptors? One possibility is that their artificial intelligence is too complicated and lifelike; Corpus couldn't separate the combat data from the emotional trauma caused by the Tenno raids. Or even better, after analyzing the combat data, the onboard intelligence correctly concludes that the probability of victory against Tenno is around 0.0001%, and the most effective combat tactic is to play dead until the Tenno leave, nullifying its combat effectiveness. Leaving Corpus with no choice but to reset the data every time.
On the Orokin Derelict, feral kavats have lived alongside the Infestation for hundreds of years. By now, all organic materials aboard, along with many inorganics, have been converted to Infested biomass. The kavats survive by feeding on it. There have to be repercussions. Their supposed "immunity" is not absolute; they've been gradually altered, fur being replaced by scales, tails and shoulders forming something resembling cysts, and so on. In Ballas's words: "Transformed, but only just".
One wonders if this alteration affects their genetics. The kavats we breed from their genetic material don't have scales, but one can use a gene-masking kit to bring that back, implying it's still there. It's plausible that the feral kavat genetics have been permanently altered by the Infestation, and our incubator alters their genome to replicate the non-Infested appearance... but only just.
My headcanon is that our kavats retain many Infested alterations in both the genotype and the phenotype. Think back on how much damage your kavat has taken over the course of your missions. No normal animal would be able to survive that many wounds. At best, it would have been horribly maimed and out of commission. Also, how exactly do we install Orokin Reactors and mods in them? It logically follows that their incubation involves a degree of modification using the Helminth, giving them the same properties of durability, regeneration, and mod compatibility as our warframes.
Oh and consider the astronomical 120k Alloy Plate spent on the Incubator kavat module. Where exactly did that Alloy Plate go? Perhaps it's being used to reinforce the kavats.
Every Vauban ability involves deploying small gear such as grenades. In contrast, most warframe abilities involve conjuring things out of nothing; think Ember, Frost, Nova, Saryn, and more. Some do have integrated gear, such as exalted weapons and Protea's deployables. However, Vauban's 100% reliance on deployables should raise suspicion. If Corpus were to create a warframe-alike, this is exactly how it would function!
Building Vauban Prime parts requires ludicrous amounts of materials compared to other frames. It's expensive and luxurious, it flaunts wealth, prosperity, and profit. Reinforced by its Codex entry:
"Lust was my sin. But greed is the blight that weakens our steel. These industrialists have gorged on the harvest of our long war. Their mind drones; Their mechanizations, toil in foundries remote. For what purpose? We must set watch upon them. Baiting our snares with the worms of profit.
Those kneeling at the altar of commerce will be returned... to the Void.
For your consideration... Vauban."
So, what if rather than being a warframe, Vauban combines armor and various weapon systems to allow a regular human to act like one, living out the fantasy? "Hey look guys, I'm totally a warframe!"
A friend suggested that perhaps Vauban is made so expensive in order to bankrupt the Corpus who're baited to manufacture it. (See the Codex entry above.) Alternatively, it's the cost of continuously manufacturing the consumable supplies, unnecessary for most warframes.
Relays have many human-looking Tenno or Tenno associates. They can be relay staff, syndicate members, and Tenno rescue targets which Lotus specifically calls "Tenno operatives". More importantly, they're found in something labeled "warframe cryopods".
The most literal interpretation is that these guys are fully-fledged Tenno, using warframes specifically designed for civil life. Those "peaceframes" appear human, but who knows what's under that visor and body-tight suit? Since they're found in cryopods, it follows that some Tenno wake up in peaceframes rather than warframes like Excalibur. Those guys went on to build relays, research technologies, and provide support to the combat Tenno.
If these guys are remote-controlled like regular warframes, their operators might still be sleeping through the Second Dream. They might even believe themselves to be human!
See the above on Tenno associates and peaceframes. If operators were also peaceframes, this would explain a lot!
In the Second Dream we supposedly awaken as the "real" puppeteer behind the golem. It's not unreasonable to suggest that there's another body behind the Operator, for real this time. But let's keep it simple and suppose the Operator is the "final" body. How could it be a peaceframe?
Option 1: fusion. Chains of Harrow establishes that a Tenno can fuse with a warframe, transferring from the original human body. Perhaps every Tenno transferred to a specially prepared, younger-looking peaceframe.
Option 2: Helminth. Just like Excalibur Umbra, the Tenno may have been infested with the Helminth to become frame-like. Warframes are known to lose their sanity in the process, but for the Tenno we can handwave it through Void magicks.
Our Operator never makes any statements contradicting any of this, but even if they did, their memory can't be trusted anyway!
This was my original headcanon about Tenno before playing the Second Dream.
The Tenno are digital minds, which allows them to body-surf between warframes. Let's assume they're run locally inside warframes, because this has far more interesting implications compared to remote control.
The Orbiter maintains the "master copy" of the Tenno personality data. Each warframe has a computing core capable of running the Tenno mind, but only one is allowed to run at a time. The active warframe continuously uploads new memories to the Orbiter. Upon critical damage, the core self-erases and breaks the uplink; the Orbiter boots up a spare warframe with the latest copy of the Tenno mind. The illusion of "self" is maintained by the continuity of memories.
Warframes lost during missions are looted by the Corpus or Grineer for experiments such as Zanuka. If the core was successfully erased, they get just the hardware. But if something went wrong and the enemy manages to preserve and boot up the core, Valkyr's origin story suddenly makes sense.
This more or less gets busted by the Second Dream. See the section on operator peaceframes for an alternate interpretation.
Basis. Ember's Codex entry: "Why would you put children on a military ship? — We didn't."
Let's run with this!
It's canon that spaceships travel through the Void, and the Void can cause temporal anomalies. What if the Zariman's journey lasted for one or several generations by the onboard clock?
The Zariman being headed for Tau (see next hypothesis) implies it was a colony ship, likely with a lot of surplus space, supplies, and a large genetically diverse population inclined to breed. If the journey was taking decades, they would start having children out of boredom or to ensure the mission continues if the original crew dies of old age. Note that "military ship" doesn't mean "not colony ship": the Orokin empire was highly militarized, and if they were going to wage war, they'd send a fleet.
Various Codex entries, as well as remembrances in the War Within and Chains of Harrow, imply that the Tenno were a relatively tight group of similar age, which makes them more likely to be generation 1 rather than N, because breeding times would diverge over multiple generations.
Getting stuck in the Void for one or several decades, with no apparent way out, could demoralize the crew to the point of madness. The synchronized craze doesn't need any special explanation other than mob effects. If the journey lasted for generations, educational and cultural decay could lead to mad suicidal cults. Note that the ship's systems could be run by a Cephalon, which tends to remain stable over hundreds of years; the crew could have lost any ability to operate the ship, kept alive by its digitized butler.
This doesn't invoke any unnecessary magic, and neatly explains why the Orokin hushed up the story: the risk of ships getting lost due to Void anomalies could demoralize the servant populations, even if such occurrences were rare.
One of the ingame materials mentions that the Zariman was headed for Tau, possibly to oversee the Sentients' terraforming efforts and start the human colonization. There are no indications of whether the ship got lost before or after reaching the destination. So let's suppose they did reach it.
Could the crew's craze have something to do with the Sentients? Maybe what they saw on the arrival was so terrifying, so devastating, that they chose to end themselves? Or perhaps the Sentients deployed some kind of psychic weapon?
The Operator seems to have memories of parents on the Zariman, and of killing the ship's adults. But the Operator's memory is untrustworthy; it's been tampered with, has massive omissions, and what's there is extremely vague. It's remotely plausible that the Void reversed their age and messed with their heads, causing them to form false memories.
One possibility is that the entire Zariman crew got turned into children, forming fake memories of the massacre.
Alternatively, one part of the crew became children, while the rest stayed as adults and took care of them. Eventually, either:
Perhaps everything was according to procedure. The Zariman's crew consisted of only human adults, they didn't reproduce, didn't reverse-age. Instead, creatures indistinguishable from human children appeared out of the Void, and the rest is history.
"Void entity" refers to any of:
Chains of Harrow establishes that the Void has an entity associated with it, possessing a human-like mind and personality. This entity seems particularly interested in Tenno operators, visiting their Orbiters and Railjacks to say hello and remind us how we "owe" it.
Intuitively, a force of nature permeating the entire universe wouldn't have a mind of its own, particularly not something as small-scale and specific as a human. It logically follows that it originated from humanity.
Perhaps humanity's existence influences the Void, forming an entity or multiple entities that reflect it. Perhaps this is humanity's gestalt. Alternatively, it could be a specific human, similar to the operators but much more "ascended".
This has obvious parallels with some other franchises; I'll let you invoke them yourself.
After completing Chains of Harrow and "freeing" Rell, an unidentified entity, seemingly Void-associated, begins visiting the operators. Palladino and Rell make claims about some "man-in-the-wall" in the Void, which may or may not be the same entity. They claim that Rell was keeping MITW away from the other Tenno. However, MITW and Rell were never seen in the same room together. Further, we know that Rell has a propensity to haunt people when emotionally destabilized, as he did when spurring his Red Veil devotees into murder sprees.
One logical conclusion is that Rell, MITW, and our mysterious visitor are one and the same. We "freed" Rell, now he haunts us. One solid counter-argument is that Rell's autistic personality drastically differs from the visitor's extravagant, gallivanting demeanor. This could be explained by a split personality, where only one half is autistic; a stretch, but not implausible, especially considering Rell no longer has a bio-brain.
Alternatively, we might take their claims at face value. We "freed" Rell, now something other than Rell haunts us. It logically follows that Rell was holding its attention, and now we've attracted it.
Let's say the Void has no "will" or "representative" of its own, but is able to "split"/duplicate people, where one "half"/copy stays in the Void, fueling your powers. There is not a single MITW but many. Albrecht Entrati had his own. We have our own. At the end of New War, the MITW we "saw" was merely an illusion created by the limits of what our mind could comprehend. A small facet of the whole.
Known Stalker canon:
Basis. Lotus: "I was trying to protect you from the truth (about the reservoir). This truth drove Stalker mad."
Basis. Hunhow to Stalker: "Do you still hate these abominations? Do you hate... yourself?"
Counter-argument: Lotus is likely to keep tabs on all Tenno in the Reservoir. She would have known about Stalker and would have taken measures to disable him.
Basis. Hunhow to Stalker: "Are you asking yourself: was I one of those wretched things?"
Perhaps the most "normal" explanation listed here.
In The War Within, the Operator recalls some Orokin looking for beautiful young bodies to transfer into, via the Continuity process. Ordis' memories in the Codex contain a scene where the Orokin offer him to become one of them, implying a process involving Kuva.
There's no particular reason to think that the Tenno exterminated all Orokin. The extermination couldn't have been instant. The Empire was vast, and the Tenno were a relatively small elite force. While the Tenno managed to collapse the core of the Empire, it seems likely that large groups of the Orokin would have escaped by hiding in the Void or other solar systems. After all, the name "Origin System" implies that other systems have been settled.
It's very plausible that there are vibrant Orokin societies out there.
See above on the Orokin society. Why does Baro trade in Orokin Ducats? Because he trades with the Orokin! His "dangerous Void safaris" are probably just trips to the discount Sunday market next door.
It's no wonder he looks down on non-prime things. Being in the same solar system as non-prime frames and primitive cultures such as the Corpus must be an emotionally traumatic experience for someone attuned to the Orokin bling.
Why does he also trade in Corpus credits? He must be doing business on both sides. He doesn't just sell primes. Some wares are upgraded versions of "modern" weapons or decorations. Some are decorations of the particular Ki'Teer brand. He probably builds them in the Origin System where it's cheaper, possibly renting workforce and manufacturing plants from the Corpus. Take particular note of the Ki'Teer Domestik Drones; they're basically rebranded Corpus Domestik Drones with a specially-decorated hull. The main difference is spying on you for Baro instead of for Corpus.
Basis. The Second Dream, Alad V to Tenno (paraphrasing): "The last time you got close with the Sentients, you destroyed an entire civilization. But you don't remember that, do you?"
The capture targets on the Infested Derelict will scream "I don't want to die!" and "No, not you, not you, not you!..". (All capture targets do, but hear me out.) Some are elites, equipped with a ridiculously powerful Glaxion that can melt anyone in seconds. These guys must be veterans, survivors of Tenno raids on Corpus ships, who requested assignment to the most remote, most dangerous place, in hopes that Tenno won't show up... and then they show up.
Cy has memories of wiping out his ship's crew "to complete the mission" and seems extremely confused about what exactly constituted the mission and why it suddenly required killing the crew.
Obvious logical explanation: the mission was against Sentients; they hacked the Cephalon and overwrote the mission objective with "ensure death of crew by disabling ship systems while blaming enemies". Octavia's quest, dumb as it is, establishes that Sentients can remotely corrupt Cephalons.
Various indications that Cy is either still corrupted or intellectually impeded in some way:
This one is rooted in a particular personal experience. At some point I ran a rescue mission in the Kuva Fortress. Upon opening the first prison cell, smack in the middle of the cell, I found one of my converted Kuva Liches, asking me "Are you trying to get yourself killed?". Technically, this triggered because I died once on the way to the prison, but the timing and positioning of the spawn was impeccable.
Converted Liches need maintenance for their flesh and cybernetics. Being Tenno-aligned outcasts, they can't exactly turn to their brethren for help. They might be making deals with Steel Meridian, like running missions for them in exchange for materials. But alternatively, they might willingly get captured, get free maintenance, and then we bust them out!
Ever notice how they bumrush to trigger alarms? Even while the alarms are already buzzing, enemies will rush to trigger more! Even better if it triggers a lockdown! Even on mission types where alarms are disabled (Capture missions), they gotta mash that button.
The headcanon is that Grineer undergo a special course in alarms. Can't rely on own strength? There's always backup! Some Grineer take an elective in Corpus tech, just to use alarms on the Corpus ships they raid.
This got retconned at some point, but the Orbiter used to be called "landing craft", and prior to that, simply "Liset". Your landing craft is clearly smaller on the outside than it is on the inside. Now that the Orbiter is supposed to be a separate ship, we don't need this explanation, but the Railjack brings it back. Hop around your Railjack in a dojo; the outside is clearly smaller.
This isn't limited to our ships. During any Railjack mission, deploy into archwing and fly around any ship or space station. Most of them are much smaller on the outside. This includes Railjack, Grineer Crewships, possibly Grineer space stations, boardable Orokin Towers, Murexes. Grineer Galleons might be an exception.
The Sungem skin modifies the craft so much that you'd expect the interior to change, but it doesn't. Furthermore, Railjack skins sometimes work and sometimes don't. For example, when looking at the RJ from inside the Orbiter, it uses the default skin. One natural conclusion is that the skin is a holo-projection which doesn't affect the interior and is not always turned on.
The Lotus' involvement in your missions is too personal, too low-level for a commander of an entire faction. Even before Natah / The Second Dream / The War Within, I assumed that Lotus is a machine, an AI powerful enough to oversee all Tenno missions at once. While the canon doesn't explicitly confirm the "all at once" part, the "machine mind" part is conveniently confirmed.
Eidolon lures are able to consume Vomvalysts and weaken the big eidolons. This kind of tech seems a bit too advanced for the Grineer. It seems a bit... convenient that it only exists on Plains, where the Quills operate.
Pretty much what the name "murmur" implies. We already have access to Grineer comms (you can listen to them in your Orbiter), but filtering the information relevant to your particular Lich could be difficult. Thralls act as a lead, letting us find the relevant comms that reveal useful details about the Lich.
Basis: the legends of Gara, Protea, Inaros; the Sacrifice (quest).
In each of these legends:
It's plausible that among the warframes made from humans using the Helminth process, some had regained their minds, or never completely lost them in the first place. When the Tenno went into stasis, they stuck around. Gara was spending time with Unum, Inaros was tracking down the remaining Orokin survivors and finishing the Tenno's work, Protea was with Parvos in the Granum Void, and so on.
At the end of Rescue missions, we rush to the landing craft and fly away, as the hostage just stands there in the extraction zone. If we really wanted to extract them, we'd put them in a pokeball, like Capture targets. Of course they get recaptured. That's why you can rerun the mission!
After the War Within, Teshin could have put together a perfectly-functional Kuva Scepter. All he needs is to run a few Synthesis missions, get Simaris standing, buy the Broken Scepter blueprint, grab some Kuva from his endless stash (see Steel Path rewards), and stick it on the Broken Scepter.
Like any Dax, Teshin is hardwired to obey anyone who wields a Kuva Scepter. So, what happens if a Dax wields it themselves? This should give Teshin the perk "Iron Will", a perfect self-geas, a self-command you can't refuse.
Inaros' sand theme doesn't make much sense among high-tech war machines made of living metal. Its visual appearance also doesn't convey particularly high durability; the deluxe skin improves on that, but doesn't look particularly tougher than, say, Chroma or deluxe Frost.
But what if Inaros is made of fast-moving, fast-replicating nano- or micro-bots? This simultaneously explains the high health pool (no vulnerable organs) and the ease of transferring that health around.
In the Corrupted faction, all units are named "Corrupted X", except for these drones. Now consider, how exactly does an Orokin Tower's "neural sentry", likely a Cephalon, maintain the towers? How does it corrupt those who visit its domain? How does it maintain the state of corruption?
It must have tools, mobile drones acting as its eyes and arms. It must have indoctrination devices, numerous and mobile. When deploying squads of Corrupted into remote areas, those eyes, arms, and indoctrination devices must be deployed with them to relay the combat situation, orders, and ensure continuous loyalty. It stands to reason that these drones are responsible for it.
(Added 2023-04-11.)
Tenno only ever speak in human form. We never hear or see warframes actually speaking. In Leverian legends, warframes are always silent. We can conclude that they never speak. So how do they communicate? Tenno may be considered telepaths, but they seem to use telepathy only for transference, and on one confusing occasion for Teshin. So either they have some kinda text holo-projectors that we've never heard about, or they use signs and gestures.
(Added 2023-04-11.)
Various sources, such as the Sacrifice, tell us that warframes are made by infecting humans with the Helminth Infestation strain, transforming them, and near-erasing their minds. Tenno routinely manufacture new warframes, often to immediately feed them back to Helminth. They require a constant supply of fresh victims. The ones most conveniently available on hand are the capture targets.
(Added 2023-04-11.)
Lech Kril wears an armor suit that hides his entire body and face and has some unusual piping; he is extremely durable and strong, has powers involving supercooling and superheating, and communicates in the same screeching "language" as some Infestation bosses. It's plausible that Lech Kril is a Grineer equivalent of a warframe, except his mind is not completely lost.
(Added 2023-04-11.)
Maybe you place tiny electron cloud projectors, maybe the Orbiter has them already built-in and you simply configure its software, or maybe it's all augmented reality in your HUD.
(Added 2023-04-11.)
Dying by failing to hack is actually because the security system hacks you back, shutting down the warframe and requiring a reboot.
(Added 2023-04-11.)
Nihil oubliette fight: we wipe his memory after every fight, and he thinks it's the first every time. Later we could restore his memory all at once, spectacularly blowing his mind and making him implode.
(Added 2023-04-11.)
Treasurers scurry away, pretending to safeguard Granum Crowns from us, while showering us with tasteful insults and accusations of Crown thievery. Tenno rob rather than steal. Treasurers' words seem like projection. If they were really trying to safeguard Crowns from us, they only needed to... not show up. Yet they come with alarming regularity. One possible conclusion is that they steal Granum Crowns from Nef Anyo's ships on behalf of Parvos, while trying to frame Tenno for it.
I like the story. It seems to place value on growth and maturity. It implies that power should be wielded by responsible grown-ups, and seems to associate maturity with kindness. (Though we can debate the kindness of a world full of suffering.) It plants the idea that we should be kinder, just in case something like this is true.
If this was the basis of a worldwide religion, the world would have been a much better place by now. Religions that promote reincarnation lack this one crucial piece, which would fall into place as if it's meant to be there. We missed a great opportunity hundreds or thousands of years ago. But it might not be too late; the world might be more receptive than it ever has been.
I immediately imagine a radical offshoot of such a religion. Looking at the state of the world, they interpret the god's idea of "growth" as "suffering" and make it their mission to spread as much misery and suffering as possible. Excluding their higher-ups, of course.
Let's indulge ourselves in thinking about the outcome. In such a scenario, everyone eventually merges into a single mind that combines all minds that have ever lived. Human, sub-human, super-human, alien, uplifted animal species, uploaded minds, AI, and more. When does this happen? The "god" has built a kill switch into the bio-species, but this can be overcome with technology. To get around that, it would also build a kill switch into the universe itself; let's say a Big Rip or a Big Crunch. This would conclude the incubation.
The idea of a single unified mind is intriguing. On the Future Shock scale, it probably places at level 4. Most people would be uncomfortable with such a future for themselves (unless, conveniently, it was planted by a religion they held since childhood). But consider this:
"Networked super-intelligence" refers to a type of intelligence that consists of individually intelligent parts that exchange information. In contrast, a non-networked super-intelligence consists only of "dumb" parts. The idea of unifying into a single mind may scare us with loss of individuality. But what is individuality but not a limitation, a border? How come I is I, and you is you? How can there be more than one ego, more than one point of view? Isn't it bizarre? Doesn't it seem kind of artificial?
We've been doing all we can to bridge this gap by inventing ways to exchange information. Body language, verbal language, rituals, drawing, music, writing, poetry, book printing, radio, TV, the internet. We seem to be moving towards some middleground between pure isolation and deep networking. What's that middleground? Does it stop at verbal exchange? Does it involve a technological telepathy that allows us to share deeper thoughts and emotions? Does it go further and allow complete exchange and on-the-fly synchronization of entire persons, merging two, or more, into one? Imagine the ability to link into one, then unlink and diverge, then sync back again. When linked, do they count as two intelligences, or just one?
We have an interesting precedent and precursor: the two hemispheres inside each human skull. Each hemisphere is capable of running an entire human person. Compare split-brain syndrome caused by severing the connection vs. having one functional hemisphere and/or having the other hemisphere removed. In successful cases of the latter, one hemisphere does in fact run the entire person. In the case of split-brain, each hemisphere more or less runs a different person. In a healthy brain, the connection between the "brains" maintains an illusion of a sole ego.
An ideal mind-link technology would have controls for privacy and degrees of data sync. It should be possible to choose what to share and how deeply. Different individuals, pairs, groups, would link to different degrees. In the limit case, it would merge them completely for the duration of the connection. Become one, then many, then one again.
The ideas of "unified mind" and "networked super-intelligence" can be seen as special cases of this mind-link, varying only in degrees. More interestingly, there's no reason for the linking to be permanent. It could be on for work, for voting on important matters, then off for leisure, or some other variation. Such a civilization would be like us today: a networked super-intelligence, but with a higher degree of efficiency. Personally, I'd be excited. What about you?
Note: check the accompanying post Game Impressions: Doom 2016 for my thoughts on the game and an analysis of what makes it enjoyable.
This is about single player only; I haven't tried the PvP.
If you find yourself bored, raise the difficulty, preferably to Nightmare. A non-Nightmare campaign can be upgraded only to Ultra-Violence, so you might have to start a new one.
The campaign is relatively short, but is never really "over". You're meant to replay individual missions in arbitrary order. You always keep your guns and upgrades.
Glory Kills become less viable on Nightmare because other enemies attack while you're locked into a recovery animation. Under fire, you're better off just shooting them.
On Nightmare, enemy shots tend to lead the target. Straight movement gets you shot.
Some enemies have surprising moves and behaviors. Ranged enemies can suddenly rush into melee. Imps have surprisingly agile melee moves. Pinkies will track you surprisingly well while charging, and can quickly turn 180 degrees to smack you. The list goes on. The developers have done an amazing job with the enemy movesets. When you find yourself going "WTF this move is bullshit!", appreciate the game's ability to surprise you!
With slow-firing weapons, it's faster to switch to another gun than wait for the recovery. Cycling between the Super Shotgun, Gauss Cannon, and Rocket Launcher, or at least two of these guns, can increase your burst DPS.
The runes are generally well-balanced, without any game-breakers or must-haves. Pick the ones that match your playstyle.
The Equipment Power rune allows Siphon Grenades to regenerate armor. Armored Offense lets you restore armor through Glory Kills. The Intimacy is Best rune helps this tactic by making enemies more easily staggered and keeping them alive. This stops being viable on Nightmare due to increased enemy aggression and damage. You get better results by focusing on not getting hit. You can recover off zombies between fights, but it's not worth the time.
In-Flight Mobility gives you more movement control in the air than you have on the ground. Handy for dodging.
Ammo Boost roughly doubles the ammo pickups. Makes ammo-intensive guns such as the Gauss Cannon more spammable. More consistently useful than Rich Get Richer.
Kills under the Berserk powerup count as Glory Kills. When berserking, use the appropriate runes, namely Seek and Destroy (launch from farther away), Savagery (kill faster), and Blood Fueled (move faster after Glory Kills). When Berserk ends, revert to your normal runes.
I recommend not buying the "exploration" upgrades. You don't want to constantly check the map for secrets. Instead, you'll replay each level using a video guide. Worse, once you find all secrets, one of the upgrades keeps beeping when close to where a secret used to be, with no way to turn that off.
As for the other upgrades, I would prioritize becoming immune to barrel explosions, then faster weapon swapping and ledge grabbing. Grenade and powerup upgrades are more situational, so get them later. Your mileage may vary.
Most guns have one useful mod, and some guns are more useful than others. You can upgrade most useful things around halfway through the game. With all secrets and challenges, you can max out everything by the end.
See below for weapon mod tips.
Has infinite ammo. Useful against zombies. They die from a single left-click headshot, but due to the low accuracy, you're better off with the right click at zero charge. Don't bother upgrading until you run out of useful upgrades for other guns. Don't bother using it against non-zombies; ammo is mostly a non-issue.
Explosive Shot is useful early; it gives you a strong, ammo-efficient mid-range attack that one-shots Imps and staggers Possessed Guards. Removing the charge delay makes it even better.
Charged Shot is comparatively useless. The charge delay makes it too hard to line up, and it's weak even at its best. You get better results by just focusing on your movement and aim.
This gun is nearly-obsoleted by the scoped Assault Rifle you can get in Mission 2, and completely eclipsed by the Super Shotgun you find in Mission 4. Don't bother upgrading it much.
Tactical Scope is the superior upgrade, if you can land headshots. Upgrades increase headshot damage and bullet damage, letting you one-headshot Imps and Possessed Guards, and two-headshot Hell Razers. With perfect aim, it's probably the quickest, most ammo-efficient way to murder humanoid enemies.
Micro-Missiles, while undoubtedly cool, are useless. They're supposed to let you spend ammo faster for more DPS. This niche is already filled by the Chaingun. The Missiles' DPS is way too low. The explosion radius is way too small. The explosion delay causes you to waste time and ammo. They have way too little stopping power, so enemies continue attacking you while being shot. Against humanoid enemies, which dominate the early game, you're better off headshotting them with the scope. Against tougher enemies, you need more DPS and/or stopping power, which every other gun does better.
Regardless of upgrades, the Assault Rifle DPS is too low against big hulky enemies, so it's better to specialize it against humanoids. In late game, it's superseded by the Rocket Launcher, which lets you clear minions faster and without exposing yourself as much.
Stun Bomb is overpowered. It instantly stuns enemies for several seconds, has a decently large radius, costs very little ammo, and has a short cooldown. It works on all non-boss enemies, even the hulky Mancubuses and Hell Barons. Get it, upgrade it, use it. Even though switching guns takes time, this actually increases your DPS by letting you safely shotgun enemies at melee range or line up a Gauss Cannon headshot. It also saves your ass against Pinkies, which charge with surprising speed and agility and have armored mugs.
Heat Blast manages to be both useless and boring. It deals splash damage in front of you, enough to one-shot humanoid enemies even at partial charge, but not enough to kill Hell Knights and other tough guys. It's completely eclipsed by the Rocket Launcher, which is spammable, has range, and has better stopping power. Use the Stun Bomb instead.
The Plasma Rifle is useful only for the Stun Bomb. Its DPS is not particularly brag-worthy. For rapid-fire damage, a scoped Assault Rifle is much better: it deals more headshot damage, hits instantly, and shots don't obscure the screen. The Gauss Cannon is much better at converting Plasma Cells into damage, with enough left-click damage to instagib humanoid enemies, enough charged damage to one- or two-headshot many big enemies, and excellent stopping power. Upgrade the Stun Bomb and use this rifle only as a combo piece.
Amazingly useful weapon that completely supersedes the Combat Shotgun. High single-shot damage, DPS, and stopping power. Has no equal at melee range. All humanoids die from a single shot, and most tough guys die in 2-4.
This weapon has no mods, and upgrades improve its primary firing mode. The mastery upgrade effectively doubles the rate of fire. I recommend fully upgrading it right away. It can be found in a "secret" in Mission 4 (Argent Facility). Make sure to hoard enough upgrade points.
The stopping power is surprisingly handy. It stops charging enemies and interrupts attacks. This makes it easy to finish them off with the next shot. You still need to dodge attacks, but the interruptions make it a lot easier.
Combines well with the Stun Bomb. The stun lets you safely shotgun the enemy at melee range and makes it easier to aim. The stun lasts long enough for 2 cycles / 4 shots. With good enough aim, this kills anyone but a Hell Baron, and those die from another shot or two.
Remote Detonation makes the weapon better at its unique job: quickly clearing groups of enemies. It passively increases the splash radius and damage, and allows you to detonate the rocket in better positions, catching more foes in the blast. Humanoid-sized enemies tend to be instagibbed by splash damage. The weapon mastery prevents the rocket from exploding along with the payload, making it possible to shoot big targets while splashing off to kill any surrounding vermin. I suspect that detonating the payload just before it connects increases the damage, but this is tricky to verify.
Lock-on Burst improves single-target DPS, a job that other guns already do better. Humanoids already die from a single rocket, while big foes require more than one volley. It's not enough to compete with the Super Shotgun, Gauss Cannon, or Chaingun. Cacodemons and Summoners can already be instakilled with a well-placed Precision Bolt. It's also suicidal at close range, which is exactly where the big meat boys charge.
The Rocket Launcher's unique niche is to quickly kill groups of humanoid enemies. Late missions consist of arenas that spawn multiple waves, mixing humanoids and big guys. The launcher, particularly with the Remote Detonation evolution, is particularly good at killing humanoids with splash damage, requires very little aim, and can be used without exposing yourself. It easily supersedes the scoped Assault Rifle at this job. It also has good stopping power; while big guys require multiple rockets, said rockets will often stop their charge or briefly stop them from shooting you.
Both Gauss Cannon upgrades let you charge a more powerful shot, with similar charge times and damage values.
Precision Bolt is for long range, with a scope. A fully-charged Precision Bolt headshot instakills Hell Knights, Summoners and Cacodemons. Note that the Cacodemon weak spot is their eye, not the entire body. When upgraded, it doesn't impede your movement, but does impede aiming sensitivity, which is awkward at close range and even midrange. The mastery upgrade makes the victims explode, instakilling humanoid enemies in proximity, making the weapon useful against all targets.
Siege Mode is for midrange, without a scope. Some enemies that die from a single Precision Bolt headshot also die from a single Siege Mode body shot; examples include Cacodemons, Summoners, Revenants, and possibly more. It also one-shots Pinkies in the mug, the only non-BFG weapon that does. Unlike Precision Bolt, it impedes your movement while charging; you can work around this by hiding around corners, which you should be doing anyway. The wide beam makes it less accuracy-dependent than Precision Bolt. It doesn't impede aiming sensitivity, but does impede movement, which is risky at close range.
The Gauss Cannon is very well-rounded and works against all targets at all ranges, as long as you have decent accuracy. The only problem is ammo. Depending on your ammo capacity, it ranges from 10 to 23 shots for normal or Precision Bolt, and from 5 to 11 shots for Siege Mode. Upgrading the ammo capacity, using the Chainsaw, and using the Ammo Boost rune help make it more spammable. Late-game arenas also have more ammo lying around.
Mobile Turret has a (short) deployment time and impedes movement, but eliminates the spin-up time and greatly increases the fire rate. It deals very high single target DPS. Unlike the unmodded Chaingun, it doesn't impede your aim sensitivity.
Gatling Rotator appears to be inferior to Mobile Turret in every way. It still has a spin-up time, still impedes your aim sensitivity, and doesn't have enough DPS to compete with other guns.
The Chaingun is supposed to have high single target DPS against big enemies. Unfortunately for the Chaingun, this niche doesn't need filling. Mancubuses are easy to avoid, Summoners are evasive, and the other big guys tend to rush into close range, asking for the Super Shotgun. The Chaingun also requires you to stay exposed while shooting while lacking any splash effects, risking getting crossfired. The Rocket Launcher is much better against enemy groups, and the Stun Bomb trivializes isolated targets regardless of your weapon choice.
You find this gun about halfway through the game. It can instagib large groups of enemies.
The BFG shoots a slow projectile that damages all enemies around it and deals large damage on impact, kinda like the Quake 2 version. For better results, maximize the travel time by shooting into empty space rather than enemies or walls.
Maximum ammo is always 3, unaffected by suit upgrades. Starting with Mission 8 where you find it, each mission has a handful of BFG charges, so you get to use it sparingly. I don't know if chainsawing enemies can produce BFG ammo. The rough rule of thumb is that large arena-style rooms that spawn waves of enemies will often have a BFG charge. Some later-game arenas have more than one charge. The upgraded Ammo Boost rune gives all enemies, including zombies, a chance to drop a BFG charge regardless of how they die. This is handy if you find yourself relying on the BFG, and is the only way to replenish BFG charges in early missions.
Best targets for the BFG are whatever tends to kill you. Even the biggest enemies are easy prey to the Stun Bomb; enemies are more dangerous in numbers, particularly if the arena layout allows them to crossfire you. If you find yourself overwhelmed and cornered, the BFG guarantees breathing room to regain control of the fight.
The BFG doesn't instakill bosses, but deals serious damage and briefly stuns them, which is handy for interrupting hard-to-avoid attacks. Bosses occasionally drop BFG ammo in addition to all the other ammo they disgorge when damaged, so hoarding all 3 charges is basically wasting them.
That's all for now. Read the accompanying post Game Impressions: Doom 2016, and have fun!
Spent the past week absorbed in Doom 2016. My "let's play" is being gradually released on Youtube: playlist link. This post summarizes my impressions.
I've tried to phrase this from my subjective perspective: "what I like" rather than "what's good", because everyone likes different things. The accompanying post Tips and Tricks: Doom 2016 contains advice on how to play, while this post analyzes the game's design. This is about single player only; I haven't tried the PvP.
Things I like:
Things I don't like:
Things I'm lukewarm on:
I really like how this game implements difficulty levels.
Difficulty doesn't seem to affect enemy spawns or enemy health. Instead, it makes them more aggressive, accurate, and damaging. You also get less health and armor from pickups. You have to pay more attention, think faster, dodge enemy attacks better. But monsters don't get any healthier. By playing well, you can still do a glorious slaughterfest, while always a few errors away from death. It checks your skill, not your gear or patience.
I particularly like how much difficulty is added through the enemy behavior, not through numbers. On lower difficulties, enemies pause between actions, while on Nightmare they won't give you any slack. Sometimes they will use surprising moves, like suddenly switching from long-range shooting to a melee rush. Their shots also start leading your movement, a subtle change that requires smarter dodging on your part.
This reminds me of the best 3rd person melee games, which use basically the same approach. In Devil May Cry 1/3/4/5, enemies get more aggressive and hit harder as you raise the difficulty. On the highest difficulty, you die in one hit. It's the ultimate skill check that requires a perfect performance. The game is carefully designed to make this hard but possible.
This isn't new to id Software games or shooters in general, but many still get it wrong. For example, I enjoy the Borderlands series, but it relies too much on stats, turning higher "difficulties" into a pure gear check, and is particularly guilty of bullet sponge enemies.
I also like that dying doesn't cost you time. The game saves between each "arena" encounter. Dying just forces you to replay the last encounter that killed you, and do it properly this time. Monsters also disgorge health packs when your health is low, and ammo is everywhere, so you can't get stuck by entering an arena unprepared. These nice quality-of-life features make higher difficulties comfortable while still dangerous.
If you consider yourself good at shooters, try the Nightmare difficulty. You'll die a lot while learning, but this just makes getting on top more rewarding.
The game has a healthy variety of monsters, which are well-animated, well-programmed, and have a wide variety of moves.
I feel like the monsters are animated better than in most games. There's a certain smooth, fluid feel to their moves. I'm not educated enough to describe this in technical terms, but I certainly appreciate the animators' work.
Some enemies have multiple attack patterns, which makes them harder to predict and requires you to pay more attention. For example, Imps can throw fireballs on the move, sometimes several in a row, sometimes while hanging from walls, or charge bigger fireballs. Attack frequency seems to vary; I haven't noticed a set pattern. Imps smoothly switch between ranged and melee. When close, they'll go for melee, and may pursue you aggressively. They can also just randomly decide to rush into melee on their own. Or they can start with melee and run away for ranged attacks. The lack of a set pattern breaks up the rhythm and requires attention, which I quite enjoy. This is similar for other ranged enemies, though Imps are probably the most complex.
Melee enemies are comparatively more primitive and predictable, but also have a bit of move variety. Hell Knights can jump-slam for splash damage, lunge to grab, turn around with an uppercut, and more. They'll always charge you, which makes them a bit too easy to predict. I would probably appreciate if melee monsters at least tried to dodge.
The game has four upgrade progressions:
I don't really like the health and armor progressions. By the end, they double your health and triple your armor. They just boost your numbers without changing how you play, which is exactly what I praise this game for mostly avoiding. They add to the power creep, widening the difficulty gap between the early and late missions and thus impeding replayability. The game would have been better without them.
Utility upgrades are things like faster weapon swapping or immunity to barrel explosions. They don't affect the game much; barrel immunity is the only notable effect. I'm guessing they were added to incentivize secret hunting. Fortunately, these upgrades don't increase your power much. They could be removed from the game and nobody would notice.
I really like the design of weapon mods and runes, see below.
The game has 8 guns, and 6 of them have "mods" for another firing mode. You gradually earn upgrade points, and can spend them to improve those mods even further.
What I really like about this design:
Mod balance isn't perfect; I consider almost half of them useless. See the weapons section of the accompanying tips & tricks post. But I still really like the approach.
You can eventually obtain and max out all upgrades. The game doesn't stop you from playing with all of its toys.
The weapons themselves are what you'd expect to find in a Doom game. Without mods, they're not particularly imaginative, at least nowhere near the level of Painkiller or Unreal Tournament. But mods and how they match up against different enemies are more than enough to compensate.
In a nice touch, late-game guns share ammo with early-game guns. This has numerous benefits. You can more reliably find ammo for any particular weapon. In intense fights, you don't have to cycle through unwanted guns just because you ran out of ammo for a favorite; chances are, you have a favorite for each ammo type. This also improves replayability: early missions have ammo for late-game weapons.
I appreciate the "mastery" challenges required for the last upgrades. Some of them force you to pay attention and try something new. For example, the Tactical Scope mastery requires many headshot kills, training you to use the Assault Rifle the right way. Without this challenge, I probably wouldn't have realized how effective headshots are. The Precision Bolt challenge teaches you to one-headshot Hell Knights, which you normally wouldn't try. Not every challenge surprised me, but it's a good execution of a good idea.
Runes passively change something about your character. You gradually earn 12, but can only use 3 at any time. This means you don't just max them out and forget about it; you'll keep thinking about which runes to pick for a given situation or playstyle, which can be interesting.
Unlike other upgrades, they're unlocked and upgraded through challenges rather than points. The unlocking challenges teleport you into rooms with very strict rules, while the upgrade challenges are done through normal gameplay. Just as with weapon upgrades, I really like this approach. It provides a different kind of gameplay and can teach you something new. For example, one challenge requires you to survive a dangerous fight on 1 health. You might have to learn to dodge some attacks you didn't before. The same challenge also forces you to use a Gauss Cannon with an un-upgraded Siege Mode, which stops your movement; this suicidal tool turns out to be vital against charging Pinkies. That challenge can teach you a lot, and you wouldn't try that during normal gameplay.
Rune upgrades are done by performing a certain action N times. I find the requirements a bit too bland. Unlike weapon mastery challenges, they tend to require actions you're already doing: Glory Kill N demons, pick up N armor, and so on. They tend to just happen in the background without requiring much thought. But the overall concept is solid; I prefer it to any currency-based unlocks.
What I like the most is that rune bonuses are qualitative, not quantitative. No rune gives you flat +power. Instead, they change your behavior. Consider In-Flight Mobility: it increases "air control", which is how quickly you change direction in the air. As minute as it sounds, this is handy when enemy shots lead your movement and you need to change direction constantly. It's also handy for platforming. The only reason you can consider using it is that it doesn't compete with a rune for +damage or +protection. There isn't one, so you get to play with the interesting qualitative effects. This also holds the player's power in check, keeping the early and late missions closer in difficulty, which is important for replayability.
This section shifts from praise to critique and talks about wider game design principles. Feel free to skip. I should probably develop this into a separate post.
I like the ability to replay missions in arbitrary order, while keeping and even advancing the upgrades. Would be even better if scripted scenes were skippable. That said, eventually you get bored.
I've heard phrases like "the game doesn't outstay its welcome, and that's fine". While logical, it misses a larger point. It assumes that boredom was inevitable. What exactly leads to it, and what could postpone it?
Many get bored during the first playthrough. Those people should raise the difficulty to Nightmare. You can't expect such a mechanics-centric game to be interesting when it doesn't challenge you. Let's talk about replayability after the first playthrough.
The game consists of a few fixed, hand-crafted maps. Each map spawns the same monsters in the same locations, and most objectives are linear. You can play missions in a different order, and for some time, there's variety in trying different guns, weapon mods, rune combinations, tactics, and learning how to handle each monster type. But eventually you memorize each map, each encounter, and start relying on pre-set patterns that trivialize any challenge.
In short: replayability requires novelty, and the fixed structure impedes it by definition.
The last few years, I've been fascinated with how "roguelike" games make themselves endlessly replayable by branching or randomizing most elements of the gameplay. My favorites are FTL and Slay the Spire. They consist of short "runs", and each run branches or randomizes the map, enemies, events, and the tools you get to use. This creates a combinatorial explosion of scenarios, making each run unique. In my view, the key to keeping it fresh is that each run forces you to adapt your tactics to the tools and enemies you happen to find, which are only a fraction of the possible combinations. The more the runs can differ, the more potential variety there is. Only the rules of the game need to stay consistent. The same principle should work for FPS games.
Traditional storytelling requires a mostly-linear structure. But in such a mechanics-centric game, I would happily trade the coherent narrative for branching or randomization that improves replayability. We can find other ways of telling the story. It could be pieced together from scattered pieces, like a puzzle. In Doom, the story is merely a backdrop for the action anyway.
FPS games with roguelike elements do exist, but it takes many attempts to produce a catchy masterpiece. Among hundreds if not thousands of tactical 2D roguelikes, I like only two: FTL and Slay the Spire. This doesn't mean every contender is worse, but they might not have gotten as lucky with marketing. The fact remains that it can take hundreds of games before a big success. So keep trying, developers.
That's all for now. Read the accompanying post Tips and Tricks: Doom 2016, and have fun!
For the last month or so, I've been absorbed by Astroneer (link: https://astroneer.space). It's an amazing game that was released just recently (Feb 6).
I've started making video guides on it, tending towards the more advanced aspects of the game that could otherwise go unnoticed or be hard to figure out. The series, called Astrotips, has just started. Subscribe to my Youtube channel for regular updates. See you there!
Channel link: https://www.youtube.com/channel/UCt6dH_XZxJCgaa6vwqrwFxA
Astrotips playlist: https://www.youtube.com/playlist?list=PLfygJGWNJ-9WaNWXim4P7lLwZ0ooSWLQ4
Edit 2020-10-21: see the newer post Language Design: Case Conventions.
Programming has the concept of an "identifier". Identifiers are used for keywords, variable names, etc. Most languages restrict identifiers to Latin letters, digits, and an underscore.
An identifier may consist of several words without spaces. The commonly used case styles can distinguish individual words:
oneTwo -- lower camel case
OneTwo -- title camel case
one_two -- lower snake case
ONE_TWO -- upper snake case
one-two -- lower kebab case
ONE-TWO -- upper kebab case
All have at least two desirable properties: "separability" and "consistency". Words must be separable; consistency ensures this rule is followed without exceptions.
A problem peculiar to TitleCamelCase is how to treat abbreviations. Behold this monstrosity from JavaScript's DOM API:
XMLHttpRequest
It's inconsistent: XML is spelled in capitals, while HTTP is spelled in title case, like a word. What gives? There were three ways to spell it out:
1. XmlHttpRequest
2. XMLHTTPRequest
3. XMLHttpRequest
We see that (2) breaks separability while (3) breaks consistency. The general conclusion is that insisting on abbreviations leads to weird names, and is not compatible with the desirable properties of case styles.
The only generally consistent approach is to ignore abbreviations, i.e. treat them as words:
XmlHttpRequest
As a bonus, non-abbreviated TitleCamelCase is easier to automatically parse, convert to other cases, and reverse. Example:
XmlHttpRequest -> xml_http_request -> XmlHttpRequest
For automatic tools, parsing inconsistent abbreviations is not impossible; for example, my Sublime Text plugin for converting between cases can handle them. But the conversion is still not reversible.
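To make the round trip concrete, here's a minimal Go sketch (my own illustration, not taken from any particular plugin or library) that converts a consistent TitleCamelCase identifier to snake_case and back. Because every word boundary is marked by exactly one capital letter, no information is lost:

```go
package main

import (
	"fmt"
	"strings"
	"unicode"
)

// camelToSnake splits a consistent TitleCamelCase identifier at each
// capital letter and joins the lowercased words with underscores.
// Assumes ASCII identifiers.
func camelToSnake(s string) string {
	var words []string
	start := 0
	for i, r := range s {
		if i > 0 && unicode.IsUpper(r) {
			words = append(words, strings.ToLower(s[start:i]))
			start = i
		}
	}
	words = append(words, strings.ToLower(s[start:]))
	return strings.Join(words, "_")
}

// snakeToCamel reverses the conversion by capitalizing the first
// letter of each underscore-separated word.
func snakeToCamel(s string) string {
	words := strings.Split(s, "_")
	for i, word := range words {
		if word != "" {
			words[i] = strings.ToUpper(word[:1]) + word[1:]
		}
	}
	return strings.Join(words, "")
}

func main() {
	snake := camelToSnake("XmlHttpRequest")
	fmt.Println(snake)               // xml_http_request
	fmt.Println(snakeToCamel(snake)) // XmlHttpRequest
}
```

Feed it the inconsistent XMLHttpRequest instead, and it produces the garbled x_m_l_http_request, which illustrates the point above.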
Finally, having just one choice means less thinking, which is good.
That's all.
The Go programming language espouses "less is more". It prefers fewer features and "one way of doing things". However, it still has some fat to lose! This article highlights what I consider unnecessary, and suggests a path to gradual deprecation and removal.
Goes without saying: this is an opinion piece. If we disagree, that's cool!
This is just what I consider relatively easy to remove. I have other complaints about Go, mostly related to its deep fundamentals that would be very hard or impossible to change. They're not mentioned in this piece.
We're not allowed to break existing code under Go1. However, it seems plausible to migrate most existing code in advance, preparing it for the hypothetical Go2 that removes the deprecated features, alongside other breaking changes it's expected to make. The following migration strategy seems realistic:
go fix converts existing code to the "new" style, avoiding the "deprecated" features:

- Remove := in favor of var
- Remove iota
- Maybe remove new in favor of &

Remove := in favor of var
1. Having two equivalent assignment forms is redundant.
2. := can't justify itself with brevity. Compared to var, it requires one or two fewer keystrokes to type, but involves Shift and an awkward movement between : and =. Subjectively, I find var easier and faster to type.

3. Code sometimes needs to be converted between :=, var, and const. For example, you have a string that's initially produced by fmt.Sprintf, but as you edit the code, it becomes a const. Or vice versa. I find these conversions fiddly and awkward. Converting between var and const is noticeably easier.

Moving a declaration between local and global scopes also involves converting between := and var. This should be unnecessary.
4. Some idiomatic code already prefers var. For example, it's commonly used for zero values:

var buf bytes.Buffer
buf.WriteString("hello world!")
_ = buf.Bytes()
5. As shown above, var lets you specify the type. Type inference is nice, but sometimes you have to spell it out:

num := 10
num := float64(10)
var num float64 = 10
var num = float64(10)
Without :=, you'd have less choice, which is good.

6. var also allows the blank identifier:

var _ = 123 // compiles
_ := 123    // doesn't compile
7. var is also better for code highlighting. While writing a Go syntax definition for Sublime Text, I found that it's impossible to correctly scope the following:

one, two := someExpression

Scoping the variable names as declarations with := requires multiline lookahead or backtracking, neither of which is supported in the modern Sublime Text syntax engine.
With var, this can be properly scoped without multiline lookahead or backtracking:

var one, two = someExpression
Completely embracing var requires an addition to the language. Various forms of if, for, select, and switch currently support := but not var:

// compiles ok
select {
case err := <-errChan:
case msg := <-msgChan:
}

// doesn't compile
select {
case var err = <-errChan:
case var msg = <-msgChan:
}
For Go1, adding the missing var support would be a safe, backwards-compatible change.

See the related gofmt change below.
Remove lists for var, const, type, import
Let's start with arguments in favor of the feature.
Currently, parenthesized lists have exactly one non-aesthetic reason to exist: const (...) enables the use of iota, acting as its scope.
import is traditionally listed, so the keyword doesn't repeat:

import (
	"bytes"
	"encoding"
	"encoding/base64"
)

import "bytes"
import "encoding"
import "encoding/base64"
That's a weak-ass justification for an entire language feature, made even weaker by goimports, which edits your imports automatically.
Now, arguments against the feature.
Code should be convenient to type and edit. I think having options hinders that. Every time you write adjacent vars, some of your neurons are wasted on choosing between:
var one = _
var two = _
and:
var (
	one = _
	two = _
)
Worse, it occasionally leads to menial conversions between the two. That's a waste of brainpower and typing. Let's say you have a single const:
const one = 10
Now you're adding another:
const one = 10
const two = 20
You might be compelled to convert to the list style:
const (
	one = 10
	two = 20
)
We've now wasted some brainpower and typing. Without lists, this would not have happened.
For consistency, the go.mod syntax should also remove lists.
Remove iota due to removing lists

iota requires parenthesized const (...) for scoping. Removing lists also leads to removing iota.
While I tend to avoid iota, I don't have a strong argument against it. If keeping iota in the language is important, then instead of removing lists entirely, we could just consider them non-idiomatic unless iota is used.
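For reference, here's typical iota usage; this is standard Go, shown to illustrate why the feature leans on the parenthesized form: the counter resets to 0 at the start of each const (...) block and increments on every constant specification within it:

```go
package main

import "fmt"

// Within a const (...) block, iota starts at 0 and increments
// by one for each constant specification.
const (
	Red   = iota // 0
	Green        // 1 (repeats the previous expression)
	Blue         // 2
)

// Outside a list, iota still compiles, but always yields 0,
// which is useless; the list is what gives it room to count.
const Zero = iota

func main() {
	fmt.Println(Red, Green, Blue, Zero) // 0 1 2 0
}
```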
Maybe remove new in favor of &

new was relevant when & was allowed only on "storage locations" such as variables and inner fields. Now that & is allowed on composite literals, new is close to obsolete.
new is limited to a zero value, while & allows content:

client := new(http.Client)
client.Timeout = time.Minute

client = &http.Client{Timeout: time.Minute}
Currently, & doesn't work with non-composite literals:

// doesn't compile
_ = &"hello world!"
Before new can be removed, & needs to be extended to support primitive literals. That would make it strictly more powerful than new. (Edit 2020-10-19: some types, such as interfaces, don't have literals and can never be instantiated with &, but can with new.)
Allowing & on primitives would also make it easier to print Go data structures as code. Currently, pretty-printing libraries have to resort to ugly workarounds to support those types.
Note that most code can already be converted to &. Code like new(string) or new(int) should be rare in the wild.
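To illustrate the status quo (the helper below is my own sketch, not a standard API): because & rejects primitive literals, code that needs a *string or *int today goes through an intermediate variable or a tiny helper function, where & works because the parameter is a storage location:

```go
package main

import "fmt"

// strPtr is the kind of throwaway helper such code resorts to,
// since `_ = &"hello world!"` doesn't compile.
func strPtr(v string) *string { return &v }

func main() {
	// Workaround 1: an intermediate variable.
	s := "hello world!"
	p1 := &s

	// Workaround 2: a helper function.
	p2 := strPtr("hello world!")

	fmt.Println(*p1, *p2)
}
```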
For Go1, extending & to primitive literals would be a safe, backwards-compatible change.
Remove import . "some-package"
Dot-import splurges all exported definitions from another package into the current scope:
import . "fmt"

func main() {
	Println("hello world!")
}
Having read a considerable amount of code in multiple languages with this import style, I'm convinced that it's always a bad idea. Subjectively, it makes the code harder to understand and harder to track down the definitions. Objectively, it makes the code more fragile against changes.
Remove if _ := _; _ {}
Subjectively, I find this form annoying to type and annoying to read. Objectively, it's a choice, and this post is predicated on "choice is bad". This wastes everyone's brainpower; anyone reading the code has to be aware of both syntactic forms.
Instead of two options:
if ok := _; ok { _ }

ok := _
if ok { _ }
Let's leave just one option:
var ok = _
if ok { _ }
If subscoping the variable is vital, just use a block. This also allows you to subscope more than one variable.
{
	var ok = _
	if ok { _ }
}
(This entry was added on 2020-06-11.)
In Go, the following forms are equivalent:
var _ = 0.123
var _ = .123
The short form works only for numbers whose magnitude is below 1, and is not essential. The long form is essential and more general. Subjectively, I find the short form slightly harder to read; my brain starts thinking about typos and other syntactic forms involving dots. Objectively, it creates an unnecessary choice. Let's leave just one option: the "long" form.
gofmt change: align var, const, type, import

Currently, gofmt aligns adjacent assignments only in parenthesized lists:

const (
	one   = 10
	two   = 20
	three = 30
)

const one = 10
const two = 20
const three = 30
After removing parenthesized lists, we probably want gofmt to align adjacent non-parenthesized assignments:

const one   = 10
const two   = 20
const three = 30
While writing this post, I tried to argue that complex numbers should be moved from built-ins to the standard library, but ended unconvinced.
Arguments for moving:
Arguments against moving:
Complex numbers are supported across the standard library: math, strconv, fmt, encoding/json, encoding/xml, etc.

In the end, I'm not convinced that it's worthwhile.
Have any thoughts? Let me know!
Welcome and/or welcome back!
This place is intended as a blog about programming and tech in general, possibly with a sprinkling of philosophy and entertainment. There was a burst of activity in 2015, followed by three and a half years of hiatus.
In 2019, I intend to blog regularly. I have a huge backlog of topics to cover and opinions to share. They roughly fall in the following categories:
Optimizing website performance is tricky. There's plenty of articles delving deep into technical detail, like this great guide by Google. Naturally, when you make it that hard, most people aren't going to bother.
What if I told you there's a way to dramatically speed up page transitions just by adding a library? With zero or few code changes? And it's overlooked by the contemporary blogosphere?
Demo time! https://mitranim.com/simple-pjax/
Who benefits from this?
As you might have guessed, we're going to exploit clientside routing with history.pushState. It's usually considered a domain of client-rendered SPAs, but what a mistake that is!
When you think about it, the status quo of content delivery on the web is insane. We're forcing visitors to make dozens of network connections and execute massive amounts of JavaScript on each page load on the same site.
👎 Typical page transition
With pushstate routing, we can do better.
👍 Page transition with pjax
The idea is dead simple. Say a user navigates from page A to page B on your site. Instead of a full page reload, fetch B by ajax, replace A, and update the URL using history.pushState. This technique has been termed pjax.
Here's a super naive example to illustrate the point. (DON'T COPY THIS, SEE BELOW)
document.addEventListener('click', function(event) {
  // Find a clicked <a>, if any. Must be `let`: the loop below
  // reassigns it while walking up the DOM tree.
  let anchor = event.target
  do {
    if (anchor instanceof HTMLAnchorElement) break
  } while (anchor = anchor.parentElement)
  if (!anchor) return

  event.preventDefault()

  const xhr = new XMLHttpRequest()

  xhr.onload = function() {
    if (xhr.status < 200 || xhr.status > 299) return xhr.onerror()
    // Update the URL to match the clicked link.
    history.pushState(null, '', anchor.href)
    // Replace the old document with the new content.
    document.body = xhr.responseXML.body
    window.scrollTo(0, 0)
  }

  xhr.onerror = xhr.onabort = xhr.ontimeout = function() {
    // Ensure a normal page transition.
    history.pushState(null, '', anchor.href)
    location.reload()
  }

  xhr.open('GET', anchor.href)
  // This will automatically parse the response as XML on the fly.
  xhr.responseType = 'document'
  xhr.send(null)
})
I have fashioned this into a simple, fully automatic library. Just drop it into your site and enjoy the benefits. Feedback and contributions are welcome! If you happen to find a better implementation, I'd be happy to hear about it.
Despite the simplicity, the benefits are stunning. This gives your multi-page website most of the advantages enjoyed by SPA. The browser gets to keep the same JavaScript runtime and all downloaded assets, including images, fonts, stylesheets, etc. This dramatically improves page load times, particularly on poor connections such as mobile networks. This also lets you maintain a persistent websocket connection while the user navigates your server-rendered multi-page app!
Also, I can't overstate how wasteful it is to execute all scripts on each new page load, which is typical for most websites. I just checked Wired and the total execution time of all scripts was 480 ms before ads kicked in. Each new page reruns all scripts. Using pjax, you can eliminate this waste, keeping your website more responsive and saving the visitors' CPU cycles and battery life.
You need to watch out for code that modifies the DOM on page load. Most websites have this in the form of analytics and UI widgets. When transitioning to a new page, that code must be re-executed to modify the new document body.
Before a transition, you'll need to perform teardown, like unmounting React components or destroying jQuery plugins. Do that in a document-level simple-pjax-before-transition event listener.
After a transition, you'll need to run the same setup as on the first page load. Do that in a document-level simple-pjax-after-transition event listener.
simple-pjax also reruns any inline scripts found in the new document body, which makes it compatible out of the box with common analytics snippets.
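To illustrate, here's one way you might wire app setup and teardown to those transition events. This is a hedged sketch: the event names come from simple-pjax, but mountWidgets and unmountWidgets are hypothetical functions your app would define, and the document object is passed in as a parameter only to keep the example self-contained.

```javascript
// Sketch: wire app lifecycle to simple-pjax's transition events.
// `mountWidgets` and `unmountWidgets` are hypothetical app-defined functions.
function wirePjaxLifecycle(doc, mountWidgets, unmountWidgets) {
  // Tear down before the old body is replaced...
  doc.addEventListener('simple-pjax-before-transition', unmountWidgets)
  // ...and repeat the first-load setup on the new body.
  doc.addEventListener('simple-pjax-after-transition', mountWidgets)
}

// In the browser you'd call:
// wirePjaxLifecycle(document, mountWidgets, unmountWidgets)
```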
You'll also need to take special care with widget libraries that have a fragile DOM lifecycle, like Angular or Polymer; they break when the document body is replaced. Notably, React is perfectly compatible; just make sure to unmount all components before the body is replaced.
Pjax has been around for a few years. There are a few implementations floating around, like the eponymous jQuery plugin. Pjax is baked into Ruby on Rails and YUI. Many sites use it in one form or another.
Why isn't pjax more popular? Maybe because people overengineer it. The libraries I've seen tend to focus on downloading partials (HTML snippets). They require you to micromanage the markup, and some need a special server configuration. I think these people have missed the point. The biggest benefit is keeping the browsing session alive, and this can be achieved with zero configuration or thought. For most sites, this is enough, and additional effort is usually not worth it. Is this wrong? You tell me!
Let's use this technique to improve the web!
I've been turning into a bit of a performance nut lately. This is what I've found useful for speeding up websites. These are mostly frontend optimizations; I'm not going to delve into server performance here.
By far the most important thing to optimize is images. There are great free tools like graphicsmagick that let you automatically compress images without visible quality loss, rescale to different dimensions, crop, etc. They can be a part of your standard build chain, so there's absolutely no excuse for not using them. See example (scroll down to image processing).
Another important thing to compress is JavaScript. Modern JavaScript libraries (and hopefully your application's code) tend to be richly commented, bloating the source size, with the expectation of being minified for production use. With massive frameworks like Angular, React, or Polymer, the total size easily rockets past a megabyte. Minification gets it down to a manageable size.
Minifying CSS is usually less important, but like everything else, it's a useful optimization and there's no excuse for not doing it.
Network latency is a huge deal. I can't stress this enough. Depending on the connectivity between your servers and your users, latency could range from 50ms to as much as a second.
If you serve assets as multiple independent files, the browser has to make a separate network request for each. Browsers download only a few assets at a time, stalling the remaining requests, so each additional stylesheet, for example, delays the start of loading for other assets like images or fonts. Even when everything is cached and elicits a 304 "not modified" response, the browser still has to wait longer before it can render the entire page.
That's bad. To avoid that, make sure to concatenate assets used on each page, like stylesheets, scripts, and icons (see below on that).
Update: see this in-depth post on pjax.
Pjax is a cheap trick that combines history.pushState and ajax to mimic page transitions without actually reloading the page.
The basic idea is dead simple and can be implemented in a few lines of code. Attach a document-level event listener to intercept clicks on <a> elements. If the clicked link leads to an internal page, fetch the page by ajax, replace the contents of the current page, and replace the URL using pushState. For browsers that don't support this API, you simply fall back to normal page transitions.
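The feature check behind that fallback can be tiny. A sketch, with the window object passed in as a parameter only to keep the example self-contained:

```javascript
// Minimal feature check: only intercept clicks when the History API is
// available; otherwise let the browser perform normal page transitions.
function supportsPjax(win) {
  return Boolean(win.history && typeof win.history.pushState === 'function')
}
```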
Despite the simplicity, the benefits are stunning. It gives you most of the advantages enjoyed by SPAs (single-page applications). The browser gets to keep the same JavaScript runtime and all downloaded assets, including images, fonts, stylesheets, etc. This dramatically improves page load times, particularly on poor connections such as mobile networks. This also lets you maintain a persistent WebSocket connection while the user navigates your server-rendered multi-page app!
There are a few implementations in the wild, but they require client-side and server-side configuration. If you're like me, this will seem like a waste of time. The biggest benefit of pjax is keeping the browsing session alive. Micromanaging partial templates is probably not worth your time, but everyone's needs are different.
I wrote a simple pjax library that works with zero config. Check the gotchas to see if it's usable for your site, then give it a spin or roll your own! The library is also used on this very site. Inspect the network console to observe the effects.
There's a trend towards single page applications (SPA) with clientside routing and rendering. They tend to skip server-side rendering in favor of being data-driven, usually through a RESTful API. As a result, they tend to have slow initial page loads. This is bad, particularly on slow connections, which is typical for mobile.
Practice has shown that for consumer-facing websites, initial load time matters. On top of that, lack of prerendering costs you SEO. Don't fall into this trap; server rendering is a sacrifice you don't have to make. Some JavaScript UI libraries, like React, already support isomorphic routing and rendering, and other frameworks, like Angular 2 and Ember, are planning to support it. Make sure to research this feature for your stack of choice.
If your application is JavaScript-heavy, you should use a module system with lazy loading. This is supported by the ES6 module system, and you can use it today with SystemJS and, optionally, jspm. You can also achieve a similar effect with AMD.
The core parts of the application should be bundled into a single file, and big but optional parts may be imported asynchronously when needed. If your app is small, you can skip lazy loading and bundle the entire app.
Most sites need icons. In the past, we had to use raster images. However, in the days of widespread retina displays, @font-face, and SVG, that's a poor option. Hopefully you have switched to the vector alternatives: icon fonts and SVG icons. They scale to any display sharpness and are easy to style with CSS.
SVGs can be embedded into the document or base64-encoded directly into your CSS, eliminating icon flicker on page load. They can also be directly manipulated with JavaScript for cool visual effects. On the other hand, icon fonts are easier to set up and use, and cost less bandwidth than embedded SVGs. For most sites, a mix of both solutions will probably be optimal.
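As an example of the embedding approach, here's one way to base64-encode an SVG into a CSS rule at build time. The icon and the class name are made up for illustration.

```javascript
// Inline an SVG icon into CSS as a base64 data URI at build time, avoiding a
// separate request and the icon flicker on page load. The icon is a toy dot.
const svg =
  '<svg xmlns="http://www.w3.org/2000/svg" viewBox="0 0 16 16">' +
  '<circle cx="8" cy="8" r="8"/></svg>'

const dataUri = 'data:image/svg+xml;base64,' + Buffer.from(svg).toString('base64')
const css = '.icon-dot {background-image: url("' + dataUri + '");}'

console.log(css)
```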
This goes without saying, but you should double-check that your server is properly configured for static files like images, stylesheets, and scripts. It should send headers that tell the browser to cache each file, and respond with 304 for unchanged assets. This eliminates a lot of redownloading, cutting each asset's cost from latency plus download time to latency alone.
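To illustrate the mechanics, conditional requests boil down to comparing cache validators. A greatly simplified sketch of the decision a static file server makes (real servers also handle Last-Modified dates, weak validators, and more):

```javascript
// Simplified sketch of how a server chooses between 200 and 304: if the ETag
// the client cached (sent as If-None-Match) still matches the current one,
// skip sending the body entirely.
function staticResponseStatus(requestHeaders, currentEtag) {
  return requestHeaders['if-none-match'] === currentEtag ? 304 : 200
}
```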
Network latency is a huge deal. It's a part of each request made by the browser, even for static assets with 304 responses. The browser blocks page rendering while downloading the document and anything included in <head>, which defines how snappy or sluggish your site feels. The browser may also wait for the first few images (Firefox seems to have this tendency), or it may choose to render the page and later flicker them into view, and latency determines how quickly this happens.
On many sites, the document is rendered dynamically and involves database access. This absolutely needs to be fast, but that work is done only once per page load. The rest of the cost comes from network latency for the document and assets. Make sure to use web hosting with low latency for your target audience. If your audience is all over the world, pick a server with good average latency and use a caching proxy / CDN like CloudFlare to reduce latency for static content.
Simple websites with one maintainer, like a personal page or a blog, don't need a scripting engine with a database. You can prerender them into HTML files, then serve with nginx or on a service like GitHub Pages. Dynamic functionality can be implemented with ajax.
Serving static files is naturally more performant than rendering templates on each request. They're also automatically subject to caching. When the base document is cached, some browsers may serve the entire page, including assets, from the cache, rendering it with zero latency.
Static site generators are plentiful, and if they don't float your boat, you can write your own in an afternoon.
Lately I've been trying to figure out how to write shorter programs. Or, more generally, how to design simple solutions.
I often hear that "less is more", that you should KISS and follow YAGNI, yada yada. A small program is easy to understand and cover with tests. A simple API is pleasant to use. But that's still abstract. What's a practical recipe for keeping things small? We might define two attack vectors: refusing problems outright, and solving whole classes of problems instead of piling up special cases.
This approach is as simple as it gets. Saying no to a problem spares you from having to implement a solution.
Sometimes you need to draw a line and say that this feature shouldn't be in the library, the user should write a bit of glue code instead. Or that this extra concept is not worth the code savings it produces.
For programs with one well-defined function, this is known as the Unix philosophy and is straightforward to follow. But it's also useful for programs with a potentially unbounded scope, like a data modeling library or a language compiler. A surprising number of ideas turn out to be dead weight after a while.
Curiously, this takes willpower, or restraint, which seems to be an unpopular feature with developers. Adding moving parts is interesting. Being lazy is not enough; you have to apply mental effort to refuse additions and keep things simple.
Programs with an unbounded scope accumulate complexity as a result of tackling new problems, usually in response to feedback. Feedback tends to focus on specific use cases. Addressing them individually leads to an accumulation of special-case solutions, even for problems that a general-case feature could have covered, had that class of problems been foreseen in advance.
Feature feedback also indicates that the application scope perceived by users exceeds its design scope. Including a new feature or addressing a new use case would expand the implementation scope, which should be defined by the design scope, not the other way around. This means that agreeing to expand a program should begin by exploring and expanding its design scope, as if the system were being designed anew.
Therefore the default reaction to a feature request should be figuring out what class of problems it represents, and either refusing it entirely, or addressing the entire class instead.
Every person is different, but for me, both things boil down to restraint. It's tempting to add new moving parts. It's tempting to address a special case instead of figuring out a wider class of problems and a solution that covers them all. You need to stop yourself, take a step back, and remember that taking the time to find the right problem to solve will spare you from throwing solutions away.
Silly proposal to change Magic: The Gathering Online.
I was so moved by Deus Ex: Human Revolution that I wrote a big "thank you" to the company that made it.