Making sense of object-oriented programming

You're a programmer. You always know the exact specifications of the thing you've set out to model. You can always trust the dependencies you rely on to just work and never break. You can trust that the environment you're programming for will remain stable and unchanging forever.
Your neurologist suggests that your naivety and optimism may be cause for concern. Or maybe you just woke up to an angry customer's email complaining that the feature they requested wasn't implemented to their satisfaction. Or did you wake up in front of your screen, still having to fix that dependency issue that came up in yesterday's update? In any case, you probably don't have much time to think about where things started to go wrong.

At its advent, programming was about modeling closed algorithms. But growing hardware capabilities led to exponential growth in software complexity. At some point it made sense to store subroutines, blocks of code that programs could run at any time, to facilitate recurring computations that would otherwise have had to be implemented by hand multiple times. With the adoption of computers by businesses it became increasingly necessary to not only write new programs, but to successively modify existing ones. Programmers in the late 50s were already reasoning about the structure of control flow to help keep programs readable. The next step was to introduce structure to programs as a whole: if you need to look up a certain piece of functionality, it helps to have functions spread across many files whose names describe the functions they contain.

Progress in computer and display technologies led to programs requiring graphical interfaces. Programs no longer needed to model only business logic, as simply mapping operations to text commands was no longer enough to satisfy users. Videogame development centered less and less on the rules that define games, seeking instead to model ever grander virtual worlds. Businesses placed demands on software development that structured programming had trouble answering. At the same time, programmers realized that contemporary hardware allowed them to design abstractions and generalizations that promised managers more developer productivity and lower software development costs.

The mutable state of programs that had to be modified frequently became a bigger and bigger problem. Software was to become more robust and stable through a trend towards separating responsibilities into modules. Objects later took that paradigm to an extreme, and the principles advertised by object-oriented design eventually led to layers of abstractions over abstractions.

But what is object-oriented programming anyways? In general it's defined by a set of features:

Classes

Classes are blueprints for functionality. They define attributes and behavior of objects that can be derived from them. Objects can also access state shared among all objects of one class (usually called static or class attributes). The idea is that code should relate more closely to objects of the real world. You could say this is the lowest layer of abstraction employed by object-oriented design.
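As a rough sketch of what that means in practice (TypeScript here, and BankAccount is just an invented example):

    // A class is the blueprint; each object created from it carries its own state.
    class BankAccount {
        // state shared among all objects of the class (a static/class attribute)
        static accountsOpened = 0;

        // per-object attributes defined by the blueprint
        owner: string;
        balance: number;

        constructor(owner: string, initialBalance: number) {
            this.owner = owner;
            this.balance = initialBalance;
            BankAccount.accountsOpened += 1;
        }

        // behavior available on every object derived from the class
        deposit(amount: number) {
            this.balance += amount;
        }
    }

    const account = new BankAccount("Alice", 100);
    account.deposit(50);
    console.log(account.balance);            // 150
    console.log(BankAccount.accountsOpened); // 1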

Inheritance

Classes can inherit functionality from parent classes. This is used to reduce code duplication and to allow for polymorphism.
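Sketched out, again with made-up names:

    // The parent class provides shared attributes and behavior...
    class Shape {
        constructor(public name: string) {}

        describe(): string {
            return `a shape called ${this.name}`;
        }
    }

    // ...which the child class inherits, only adding or overriding what differs.
    class Circle extends Shape {
        constructor(public radius: number) {
            super("circle");
        }

        area(): number {
            return Math.PI * this.radius ** 2;
        }
    }

    const c = new Circle(2);
    console.log(c.describe()); // inherited from Shape
    console.log(c.area());     // defined by Circle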

Polymorphism

Objects of the same class hierarchy can respond to the same method names and still run their own code. They can be used interchangeably.
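The usual textbook illustration of that, as a sketch:

    abstract class Animal {
        abstract speak(): string;
    }

    class Dog extends Animal {
        speak(): string { return "Woof"; }
    }

    class Cat extends Animal {
        speak(): string { return "Meow"; }
    }

    // The caller only knows about Animal; each object still runs its own code.
    const animals: Animal[] = [new Dog(), new Cat()];
    for (const animal of animals) {
        console.log(animal.speak()); // "Woof", then "Meow"
    }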

Encapsulation

State and methods can be hidden away from external code so that only objects of their own class, or of a class in the same hierarchy, can access them.
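In a language with access modifiers that might look roughly like this (Counter is again just an invented example):

    class Counter {
        // not visible outside the class
        private count = 0;

        // visible to this class and its subclasses, but not to external code
        protected step = 1;

        increment(): void {
            this.count += this.step;
        }

        value(): number {
            return this.count;
        }
    }

    const counter = new Counter();
    counter.increment();
    console.log(counter.value()); // 1
    // counter.count = 42;        // compile error: 'count' is private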

But there is more to the concept. Bad object-oriented code is at least as dangerous as bad procedural code (actually, it's a lot worse). Principles born from the ideas of modular programming have to be followed to keep large systems extensible and maintainable. In short, these principles encourage programmers to avoid modifying existing code, to write abstraction layers that make certain classes interchangeable, and to avoid introducing state that is directly shared among classes with differing responsibilities.
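One common reading of those principles, sketched with an invented Logger example: callers depend on an abstraction, so implementations stay interchangeable and new behavior is added without touching existing code.

    // An abstraction layer: callers depend on this interface, not on a concrete class.
    interface Logger {
        log(message: string): void;
    }

    class ConsoleLogger implements Logger {
        log(message: string) { console.log(message); }
    }

    class BufferedLogger implements Logger {
        messages: string[] = [];
        log(message: string) { this.messages.push(message); }
    }

    // Existing code stays untouched; new behavior arrives as a new implementation.
    function runJob(logger: Logger) {
        logger.log("job started");
        // ...the actual work would go here...
        logger.log("job finished");
    }

    runJob(new ConsoleLogger());
    runJob(new BufferedLogger());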

I learned classic procedural programming in its most straightforward definition before really looking into object-oriented programming and learning about the concepts and principles that would keep sections of code decoupled. It took me a while to warm up to many of the abstractions and indirections employed by modern object-oriented code, and to really understand the need for most of this stuff. Now, while I've become fairly comfortable writing programs in this fashion, I'm still not entirely sold on many of these concepts and principles.

Most features of OOP are somewhat superficial. You can write any program in a strictly procedural manner; you're not missing out on any functionality by not using OOP's features. By attempting to model the real world, object-oriented code is supposed to be easier to understand and write, but in reality its layers of abstractions and indirections make it harder to follow than its procedural counterpart. Not every kind of data is best represented by a class of objects, and forcing certain data types into the object approach can feel very awkward. Object-oriented design patterns often feel absurd, like band-aids that attempt to mimic object-orientation in situations where it doesn't fit.

So I've got mixed feelings about the object-oriented approach. Sometimes inheritance is nice when it's kept to a minimum, but it has never felt strictly necessary to me. In a way I like encapsulation, but I also feel like it encourages programmers to dismiss documentation (which in my opinion was a major contributing factor to the Log4Shell security issue). Still, classes are not strictly necessary to implement encapsulation, since the concept can also be realized with plain modules, as sketched below. Something about the concept of calling methods on objects, instead of passing data to functions, still rubs me the wrong way. I've come to see the benefits of writing decoupled systems, but a lot about programming with all these indirections still feels awkward. It's also ridiculous to demonize global state in general when it's still perfectly valid in small-scale systems and programs that aren't expected to receive a lot of ongoing changes.
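On that point about modules: here is a sketch of what I mean, assuming a small counter module (file names invented).

    // counter.ts — the module's internal state is simply not exported,
    // so outside code can only reach it through the exported functions.
    let count = 0;

    export function increment(): void {
        count += 1;
    }

    export function value(): number {
        return count;
    }

    // elsewhere.ts
    // import { increment, value } from "./counter";
    // increment();
    // console.log(value()); // 1
    // there is no way to touch `count` directly from here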
At this point I just wish I had the time to write a decently sized program in both approaches to better compare them. Right now the paradigms seem to cancel each other out. Object-oriented architecture allows the programmer to write fewer lines of code at the cost of greater complexity. Procedural and modular code, on the other hand, has the benefit of being less indirect and thus easier to follow. It also has less computational overhead and allows for better optimizations.