Can mathematical theorems be proved with 100% certainty?
Here is why “Reason is my only master”
The most basic presupposition begins in reason. Reason is needed for logic (logic is realized by the aid of reason, which enriches its axioms). Logic is needed for axiology/value theory (axiology is realized by the aid of logic). Axiology is needed for epistemology (epistemology is realized by the aid of axiology, which value-judges its assumptions as valid or not). Epistemology is needed for a good ontology (ontology is realized by the aid of epistemology’s justified assumptions/realizations/conclusions). Then, when one possesses a good ontology (fortified with valid and reliable reason and evidence), one can say one knows the ontology of that thing.
So, I think, right thinking is reason. Right reason is logic. Right logic can be used for mathematics, and from there we can get to science. By this methodological approach, we get one of the best ways of knowing: the scientific method. An activating experience/event occurs, eliciting our feelings/senses. Then naive thoughts occur, eliciting emotions as a response. Emotions are unavoidable, so what matters is emotional intelligence over emotional hijacking; navigating this successfully in a methodological way is what we call critical thinking, or what I just call right thinking. So, to me, “right” thinking refers to a kind of methodological thinking. Reason is at the base of everything, and it builds up from pragmatic approaches. And, to me, there are three main approaches to truth (the ontology of truth), from the very subjective (pragmatic theory of truth), to the subjective (coherence theory of truth), and on to the objective (correspondence theory of truth). Remember that this process, as limited as it is, is the best we have, and we build one truth on top of another like blocks in a wall of truth.
In a general way, all reality, in a philosophic sense, is an emergent property of reason, and knowing how reason accrues does not remove its warrant. Feelings are experienced, then perceived, leading to thinking; right thinking is reason, right reason is logic, right logic is mathematics, right mathematics is physics, and from there all science.
Science is quite the opposite of mere common sense. To me, common sense, insofar as it generally relates to the reality of things in the world, involves “naive realism.” Most scientific thinkers, by contrast, hold to scientific realism or other stances far removed from the limited naive realism of common sense. Science is a multidisciplinary methodological quest for truth. Science is understanding what is, while religion is wishing on what is not.
According to the writers of the philosophy website LessWrong, possessing absolute certainty in a fact, or a Bayesian probability of 1, isn’t a good idea. Losing an epistemic bet made with absolute certainty corresponds to receiving an infinite negative payoff, according to the logarithmic proper scoring rule. The same principle applies to mathematical truths. As the post “Confidence levels inside and outside an argument” notes, not possessing absolute certainty in math doesn’t make the math itself uncertain, the same way that an uncertain map doesn’t cause the territory to blur out. The world, and the math, are precise, while knowledge about them is incomplete. The impossibility of justified absolute certainty is sometimes used as a rationalization for the fallacy of gray. Here is a link for Infinite Certainty. Ref
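The infinite-negative-payoff point can be made concrete. Here is a minimal sketch (in Python, with hypothetical numbers) of the logarithmic scoring rule: your payoff is the log of the probability you assigned to whatever actually happened, so assigning probability 0 to the true outcome (i.e., probability 1 to the wrong one) yields a score of negative infinity.

```python
import math

def log_score(p_assigned_to_actual_outcome: float) -> float:
    """Logarithmic proper scoring rule: the payoff for having assigned
    probability p to the outcome that actually occurred."""
    if p_assigned_to_actual_outcome == 0.0:
        # Absolute certainty in the wrong answer: infinite negative payoff.
        return float("-inf")
    return math.log(p_assigned_to_actual_outcome)

# The closer your assigned probability was to 0, the worse the penalty:
for p in (0.99, 0.5, 0.001, 0.0):
    print(f"p={p}: score={log_score(p):.3f}")
```

This is why treating a Bayesian probability of exactly 1 (or 0) as off-limits makes sense under this rule: no finite amount of later evidence can recover from an infinite loss.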
According to the writers of the philosophy website LessWrong (related: “Horrible LHC Inconsistency,” “The Proper Use of Humility”), overconfidence is a big fear around those parts. Well, it is a known human bias, after all, and therefore something to be guarded against. But what they argue is that, at least in aspiring-rationalist circles, people are too afraid of overconfidence, to the point of overcorrecting, which, not surprisingly, causes problems. (Some may detect implications here for the long-standing Inside View vs. Outside View debate.)
[I]f you asked me whether I could make one million statements of authority equal to “The Large Hadron Collider will not destroy the world”, and be wrong, on average, around once, then I would have to say no.
Moreover, according to the writers of LessWrong, we may now suspect that misleading imagery is at work here. A million statements: that sounds like a lot, doesn’t it? If you made one such pronouncement every ten seconds, a million of them would require you to spend months doing nothing but pontificating, with no eating, sleeping, or bathroom breaks. Boy, that would be tiring, wouldn’t it? At some point, surely, your exhausted brain would slip up and make an error. In fact, it would surely make more than one, in which case, poof!, there goes your calibration. No wonder, then, that people claim that we humans can’t possibly hope to attain such levels of certainty. Look, they say, at all those times in the past when people (even famous scientists!) said they were 99.999% sure of something, and they turned out to be wrong. My own adolescent self would have assigned high confidence to the truth of Christianity; so where do I get the temerity, now, to say that the probability of this is 1-over-oogles-and-googols? A probability estimate is not a measure of “confidence” in some psychological sense. Rather, it is a measure of the strength of the evidence: how much information you believe you have about reality. So, when judging calibration, it is not really appropriate to imagine oneself, say, judging thousands of criminal trials, and getting more than a few wrong here and there (because, after all, one is human and tends to make mistakes). Let me instead propose a less misleading image: picture yourself programming your model of the world (in technical terms, your prior probability distribution) into a computer, and then feeding all the data from those thousands of cases into the computer, which then, when you run the program, rapidly spits out the corresponding thousands of posterior probability estimates.
That is, visualize a few seconds or minutes of staring at a rapidly-scrolling computer screen, rather than a lifetime of exhausting judicial labor. When the program finishes, how many of those numerical verdicts on the screen are wrong?
According to the writers of LessWrong: I don’t know about you, but modesty seems less tempting when one thinks about it in this way. One can say: I have a model of the world, and it makes predictions. For some reason, when it’s just me in a room looking at a screen, I don’t feel the need to tone down the strength of those predictions for fear of unpleasant social consequences. Nor do I need to worry about the computer getting tired from running all those numbers. In the vanishingly unlikely event that Omega were to appear and tell me that, say, Amanda Knox was guilty, it wouldn’t mean that I had been too arrogant and that I had better not trust my estimates in the future. What it would mean is that my model of the world was severely stupid with respect to predicting reality. In which case, the thing to do would not be to humbly promise to be more modest henceforth, but rather to find the problem and fix it. (Computer programmers call this “debugging.”) A “confidence level” is a numerical measure of how stupid your model is, if you turn out to be wrong.
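The “program your prior into a computer and feed it the data” picture can be sketched with a toy Bayesian update (Python; the numbers are hypothetical: a 50/50 prior and evidence four times likelier under the hypothesis than under its negation):

```python
def update(prior: float, p_e_given_h: float, p_e_given_not_h: float) -> float:
    """One application of Bayes' rule: P(H|E) from the prior P(H)
    and the likelihoods of the evidence under H and not-H."""
    numerator = p_e_given_h * prior
    marginal = numerator + p_e_given_not_h * (1.0 - prior)
    return numerator / marginal

# Feed ten pieces of evidence through the model; the posterior it
# "spits out" grows extreme without the machine (or the model) tiring:
p = 0.5
for _ in range(10):
    p = update(p, p_e_given_h=0.8, p_e_given_not_h=0.2)
print(p)  # well above 0.9999
```

The point of the image survives in the code: the strength of the final number is fixed by the model and the data, not by how exhausting it would feel to assert it a million times.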
Furthermore, according to the writers of the philosophy website LessWrong, the fundamental question of rationality is: why do you believe what you believe? As a rationalist, you can’t just pull probabilities out of your rear end. And now here’s the kicker: that includes the probability of your model being wrong. The latter must, paradoxically but necessarily, be part of your model itself. If you’re uncertain, there has to be a reason you’re uncertain; if you expect to change your mind later, you should go ahead and change your mind now. This is the first thing to remember in setting out to dispose of what I call “quantitative Cartesian skepticism”: the view that even though science tells us the probability of such-and-such is 10^-50, well, that’s just too high of a confidence for mere mortals like us to assert; our model of the world could be wrong, after all; conceivably, we might even be brains in vats. Now, it could be the case that 10^-50 is too low of a probability for that event, despite the calculations; and it may even be that that particular level of certainty (about almost anything) is in fact beyond our current epistemic reach. But if we believe this, there have to be reasons we believe it, and those reasons have to be better than the reasons for believing the opposite.
According to the writers of the philosophy website LessWrong, one can expect that if you probe the intuitions of people who worry about 10^-6 being too low of a probability that the Large Hadron Collider will destroy the world, that is, if you ask them why they think they couldn’t make a million statements of equal authority and be wrong on average once, they will cite statistics about the previous track record of human predictions: their own youthful failures and/or things like Lord Kelvin calculating that evolution by natural selection was impossible.
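The “million statements, wrong on average once” framing is just expectation arithmetic. A quick sketch (Python, using the hypothetical 10^-6 figure from the LHC example): if each of a million independent statements carries a one-in-a-million chance of error, the expected number of errors is about one, and the chance of at least one error is roughly 63%.

```python
n = 10**6      # number of statements of equal authority
p_err = 1e-6   # error probability assigned to each (hypothetical figure)

expected_errors = n * p_err              # expectation is n * p, i.e. about 1
p_at_least_one = 1 - (1 - p_err) ** n    # roughly 1 - 1/e, about 0.63
print(expected_errors, p_at_least_one)
```

So being wrong "on average around once" is exactly what a well-calibrated 10^-6 commits you to; it does not require that you could never slip.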
According to the writers of LessWrong, the reply is: hindsight is 20/20, so how about taking advantage of this fact? Previously, the phrase “epistemic technology” was used in reference to our ability to achieve greater certainty through some recently invented methods of investigation than through others that are native to us. This, I confess, was an almost deliberate foreshadowing of the thesis here: we are not stuck with the inferential powers of our ancestors. One implication of the Bayesian-Jaynesian-Yudkowskian view, which marries epistemology to physics, is that our knowledge-gathering ability is as subject to “technological” improvement as any other physical process. With effort applied over time, we should be able to increase not only our domain knowledge but also our meta-knowledge. As we acquire more and more information about the world, our Bayesian probabilities should become more and more confident. If we’re smart, we will look back at Lord Kelvin’s reasoning, find the mistakes, and avoid making those mistakes in the future. We will, so to speak, debug the code. Perhaps we couldn’t have spotted the flaws at the time; but we can spot them now. Whatever other flaws may still be plaguing us, our score has improved. In the face of precise scientific calculations, it doesn’t do to say, “Well, science has been wrong before.” If science was wrong before, it is our duty to understand why science was wrong, and remove known sources of stupidity from our model. Once we’ve done this, “past scientific predictions” is no longer an appropriate reference class for second-guessing the prediction at hand, because the science is now superior. (Or anyway, the strength of the evidence of previous failures is diminished.)
According to the writers of LessWrong, that is why, with respect to Eliezer’s LHC dilemma, which amounts to a conflict between avoiding overconfidence and avoiding hypothesis-privileging, they come down squarely on the side of regarding hypothesis-privileging as the greater danger. Psychologically, you may not “feel up to” making a million predictions, of which no more than one can be wrong; but if that’s what your model instructs you to do, then that’s what you have to do, unless you think your model is wrong for some better reason than a vague sense of uneasiness. Without, ultimately, trusting science more than intuition, there’s no hope of making epistemic progress. At the end of the day, you have to shut up and multiply, epistemically as well as instrumentally. Ref
Problems in the Basic outline of One’s Epistemology?
“Incorporating a prediction into future planning and decision making is advisable only if we have judged the prediction’s credibility. This is notoriously difficult and controversial in the case of predictions of future climate. By reviewing epistemic arguments about climate model performance, we discuss how to make and justify judgments about the credibility of climate predictions, possibly proposing arguments that justify basing some judgments on the past performance of possibly dissimilar prediction problems. This encourages a more explicit use of data in making quantitative judgments about the credibility of future climate predictions, and in training users of climate predictions to become better judges of value, goodness, credibility, accuracy, worth, or usefulness.” Ref
Definition of epistemic: of or relating to knowledge or knowing
Certainty: “I know” vs “I believe”
“Certainty is often explicated in terms of indubitability. What makes doubting possible is ‘the fact that some propositions are exempt from doubt, are as it were like hinges on which those turn.’ Are you certain that you are reading this in English? I would think all are comfortable accepting that this is written in English; you would likely say you have knowledge of this. But like knowledge, certainty is an epistemic property of beliefs. Although some philosophers have thought that there is no difference between knowledge and certainty, it has become increasingly common to distinguish them. On this conception, then, certainty is either the highest form of knowledge or the only epistemic property superior to knowledge. One of the primary motivations for allowing kinds of knowledge less than certainty is the widespread sense that skeptical arguments are successful in showing that we rarely or never have beliefs that are certain (a kind of skeptical argument) but do not succeed in showing that our beliefs are altogether without epistemic worth. There is an argument that skepticism undermines every epistemic status a belief might have, and there is an argument that knowledge requires certainty, which we are capable of having. As with knowledge, it is difficult to provide an uncontentious analysis of certainty. There are several reasons for this. One is that there are different kinds of certainty, which are easy to conflate. Another is that the full value of certainty is surprisingly hard to capture. A third reason is that there are two dimensions to certainty: a belief can be certain at a moment or over some greater length of time. There are various kinds of certainty. A belief is psychologically certain when the subject who has it is supremely convinced of its truth. Certainty in this sense is similar to incorrigibility, which is the property a belief has of being such that the subject is incapable of giving it up.
But psychological certainty is not the same thing as incorrigibility. A belief can be certain in this sense without being incorrigible; this may happen, for example, when the subject receives a very compelling bit of counterevidence to the (previously) certain belief and gives it up for that reason. Moreover, a belief can be incorrigible without being psychologically certain. For example, a mother may be incapable of giving up the belief that her son did not commit a gruesome murder, and yet, compatible with that inextinguishable belief, she may be tortured by doubt. A second kind of certainty is epistemic. Roughly characterized, a belief is certain in this sense when it has the highest possible epistemic status. Epistemic certainty is often accompanied by psychological certainty, but it need not be. It is possible that a subject may have a belief that enjoys the highest possible epistemic status and yet be unaware that it does. In such a case, the subject may feel less than the full confidence that her epistemic position warrants. I will say more below about the analysis of epistemic certainty and its relation to psychological certainty. Some philosophers also make use of the notion of moral certainty. For example, in the Latin version of Part IV of the Principles of Philosophy, Descartes says that “some things are considered as morally certain, that is, as having sufficient certainty for application to ordinary life, even though they may be uncertain in relation to the absolute power of god”. Thus characterized, moral certainty appears to be epistemic in nature, though it is a lesser status than epistemic certainty. In the French version of this passage, however, Descartes says that “moral certainty is certainty which is sufficient to regulate our behaviour, or which measures up to the certainty we have on matters relating to the conduct of life which we never normally doubt, though we know that it is possible, absolutely speaking, that they may be false”. 
Understood in this way, it does not appear to be a species of knowledge, given that a belief can be morally certain and yet false. Rather, on this view, for a belief to be morally certain is for it to be subjectively rational to a high degree. Although all three kinds of certainty are philosophically interesting, it is epistemic certainty that has traditionally been of central importance. In what follows, then, I shall focus mainly on this kind of certainty. In general, every indubitability account of certainty will face a similar problem. The problem may be posed as a dilemma: when the subject finds herself incapable of doubting one of her beliefs, either she has good reasons for being incapable of doubting it, or she does not. If she does not have good reasons for being unable to doubt the belief, the type of certainty in question can be only psychological, not epistemic, in nature. On the other hand, if the subject does have good reasons for being unable to doubt the belief, the belief may be epistemically certain. But, in this case, what grounds the certainty of the belief will be the subject’s reasons for holding it, and not the fact that the belief is indubitable. A second problem for indubitability accounts of certainty is that, in one sense, even beliefs that are epistemically certain can be reasonably doubted. According to a second conception, a subject’s belief is certain just in case it could not have been mistaken, i.e., false. Alternatively, the subject’s belief is certain when it is guaranteed to be true. This is the “truth-evaluating” sense of certainty. On this view, the claim to know that a proposition is certain entails that the proposition is true; otherwise the claim to know is inaccurate. Certainty is significantly stronger than lesser forms of knowledge.” Ref
Conceptions of Certainty?
*In a general way, a fallibilistic conception of certainty (“self-presenting/self-evident”) could be stated as: one’s belief is guaranteed to be true when attempting to provide an account of fallibilistic knowledge (i.e., knowledge that is less than certain). According to the standard account, the subject has fallibilistic knowledge that a proposition is true when she knows that the proposition is true on the basis of some justification, and yet her belief could have been false while still held on the basis of that justification. Alternatively, the subject knows that a proposition is true on the basis of some justification, but that justification does not entail the truth of the proposition. The problem with the standard account, in either version, is that it does not allow for fallibilistic knowledge of necessary truths. If it is necessarily true that a proposition is true, then the subject’s belief that the proposition is true could not have been false, regardless of what her justification for it may be like. And, if it is necessarily true that a proposition is true, then everything, including the subject’s justification for her belief, will entail or guarantee that the proposition is true. The account of certainty encounters the opposite problem: it does not allow for a subject to have a belief regarding a necessary truth that does not count as certain. If the belief is necessarily true, it cannot be false, even when the subject has come to hold the belief for a very bad reason (say, as the result of guessing or wishful thinking). And, given that the belief is necessarily true, even these bad grounds for holding it will entail or guarantee that it is true. The best way to solve the problem for the analysis of fallibilistic knowledge is to focus not on the entailment relation, but rather on the probabilistic relation holding between the subject’s justification and the proposition believed.
When the subject knows that a proposition is true on the basis of a justification that confers less than full confirmation, the subject’s knowledge is fallibilistic. (Epistemologists will disagree about what the appropriate conception of probability is; this is merely a crude example of how probability may figure in a fallibilistic epistemology.) Ref
*In a general way, a falsificationism conception of certainty (“self-presenting/self-evident”) could be stated as: one’s belief is guaranteed to be true (if falsifiable, i.e., testable) if it is possible to conceive of an observation or an argument which could negate it. Statements, hypotheses, or theories have falsifiability or refutability if there is the inherent possibility that they can be proven false. They are falsifiable if it is possible to conceive of an observation or an argument which could negate them. In this sense, falsify is synonymous with nullify, meaning to invalidate or “show to be false.” For example, by the problem of induction, no number of confirming observations can verify a universal generalization, such as “all swans are white,” since it is logically possible to falsify it by observing a single black swan. Thus, the term falsifiability is sometimes synonymous with testability. Some statements, such as “it will be raining here in one million years,” are falsifiable in principle, but not in practice. The concern with falsifiability gained attention by way of philosopher of science Karl Popper’s scientific epistemology, “falsificationism.” Popper stresses the problem of demarcation, distinguishing the scientific from the unscientific, and makes falsifiability the demarcation criterion, such that what is unfalsifiable is classified as unscientific, and the practice of declaring an unfalsifiable theory to be scientifically true is pseudoscience. Ref
Epistemically Rational Beliefs
Which is more epistemically rational, believing that which by lack of evidence could be false or disbelieving that which by insufficient evidence could be true?
“Epistemic rationality is part of rationality involving, achieving accurate beliefs about the world. It involves updating on receiving new evidence, mitigating cognitive biases, and examining why you believe what you believe.” Ref
To me, the choice is to use the “ethics of belief,” and thus the more rational approach is to disbelieve rather than to believe that which by lack of evidence could be false; otherwise you would accept any statement or claim as true no matter how at odds it is with other verified facts. The ethics of belief refers to a cluster of related issues that focus on standards of rational belief, intellectual excellence, and conscientious belief-formation, as well as norms of some sort governing our habits of belief-formation, belief-maintenance, and belief-relinquishment. Contemporary discussions of the ethics of belief stem largely from a famous nineteenth-century exchange between the British mathematician and philosopher W. K. Clifford and the American philosopher William James. In 1877 Clifford published an article titled “The Ethics of Belief” in a journal called Contemporary Review. There Clifford argued for a strict form of evidentialism that he summed up in a famous dictum: “It is wrong always, everywhere, and for anyone to believe anything on insufficient evidence.” As Clifford saw it, people have intellectual as well as moral duties, and both are extremely demanding. People who base their beliefs on wishful thinking, self-interest, blind faith, or other such unreliable grounds are not merely intellectually slovenly; they are immoral. Such bad intellectual habits harm both themselves and society. We sin grievously against our moral and intellectual duty when we form beliefs on insufficient evidence, or ignore or dismiss evidence that is relevant to our beliefs. 1, 2
Philosophical Skepticism, Solipsism and the Denial of Reality or Certainty
I want to clarify that I am an ignostic, axiological atheist, and rationalist who uses methodological skepticism. I hold that there is valid and reliable reason and evidence to warrant justified true belief in the knowledge of the reality of the external world, and even if some think we don’t, we still have axiological and ethical reasons to believe, or at least to act, as if we do.
Thinking is occurring, and it is both accessible to and guided by what feels like me; thus, it is rational to assume I have a thinking mind, so I exist.
But some skeptics challenge reality or certainty (although they are themselves appealing to reason or rationality, which they seem to accept almost a priori). The brain in a vat or jar, the evil demon in your mind, the Matrix world as your mind, and the hologram world as your reality are some arguments in the denial or challenge of reality or certainty.
“Brain in a vat”-type thought experiment scenarios are commonly used as arguments for philosophical skepticism and solipsism, and against rationalism, empiricism, or any belief in the external world’s existence.
Such thought experiment arguments do have value when used with the positive intent to draw out certain features of, or remove unreasoned certainty in, our ideas of knowledge, reality, truth, mind, and meaning. However, they are only valuable as challenges that remind us to employ disciplined rationality and the ethics of belief, not as claims about actual reality. Brain in a vat/jar, evil demon, Matrix world, and hologram world become logical fallacies if assumed to be accurate representations of reality.
First is the problem that they pose a challenge (an alternative hypothesis) and thus carry their own burden of proof if they are to be seen as real.
Second is the problem of presupposition: they presuppose the reality of a real world containing factual, tangible things like brains; that such real things as human brains have actual cognition; and that there are real-world things like vats, jars, and computers, invented by human beings with real-world intelligence and the will to create and use them for intellectually meaningful purposes.
Third is the problem of a valid and reliable standard: doubt is an intellectual process, and one needs to offer a valid and reliable standard for who, what, why, and how one is proposing philosophical skepticism, solipsism, or the denial of reality or certainty. One cannot, on the one hand, say “I doubt everything” and yet not doubt even that. One cannot say nothing can be known for certain, as that violates the very thought: one would be certain there is no certainty. The ability to think with reasonable doubt (methodological skepticism) counteracts thinking with unreasonable doubt (philosophical skepticism’s external-world doubt and solipsism). Philosophical skepticism is a method of reasoning that questions the very possibility of knowledge; this is different from methodological skepticism, a method of reasoning that questions knowledge claims with the goal of finding what has warrant and justification to validate the truth or falsity of beliefs or propositions.
Fourth is the problem that external-world doubt and solipsism create issues of reproducibility, detail, and extravagance. Reproducibility, such as is seen in experiments, observation, real-world evidence, scientific knowledge, scientific laws, and scientific theories. Detail, such as the extent of information that would have to be contained in one mind: trillions of facts and definable data and/or evidence. And extravagance, such as the unreasonable amount of detail in general, which adds further strain on reproducibility and memorability. Extravagance in the unreasonable amount of detail also interacts with axiological and ethical reasoning: why, if there is no real world, would you create rape, torture, or suffering in almost unlimited variations? Why not just rape but child rape; not just torture but the torture of innocent children; and the thousands of ways these can and do happen in the external world? Extravagance is unreasonable: why a mass of cancers and infectious things, millions of ways to be harmed, suffer, and die? There would be a massive amount of extravagance in infectious agents if the external world were make-believe, because infectious agents come in an unbelievable variety of shapes, sizes, and types: bacteria, viruses, fungi, protozoa, and parasites. Therefore, the various types of pleasure and pain both seem an unreasonable extravagance in a fake external world, and the most reasonable conclusion is that the external world is a justified true belief.
Fifth is the problem that axiological or ethical thinking would say we only have what we understand and must curtail our behavior ethically to such understanding. Think of the ability to give consent: having that reasoning ability brings with it the requirement of being responsible for our behaviors. If one believes the external world is not real, one removes any value (axiology) in people, places, or things; and if the external world is not real, there is no behavior and there are no things to interact with (ethics), so nothing can be helped or harmed by actions, as there are no actions and no one acting for or against anything. In addition, if we do not know whether we actually exist and behave in a real world, we are also not certain that we do not, demanding that we act as if it is real (pragmatically), due to ethical and axiological concerns that could be true. For if we act ethically and the reality of the external world is untrue, we have lost nothing; but if we act unethically as though the external world is unreal and it is in fact real, we have violated ethics. So the only right way to navigate the ethics of belief in such matters is to behave as though the external world is real. In addition, axiological or ethical thinking and a cost-benefit analysis of belief in the existence of the external world support and highly favor belief in the external world’s existence.
Solipsism (from Latin solus, meaning “alone,” and ipse, meaning “self”) is the philosophical idea that only one’s own mind is sure to exist. To me, solipsism is trying to limit rationalism to, of, or by itself alone. Everyone, including a solipsist, has the mind as that to which all possible knowledge flows; consider this: if you think you can reject rational thinking as the base of everything, what other standard can you champion that does not, at its core, return to the processes of the mind (we do, after all, classify people by intelligence)? If you cannot use rationalism, what does that leave: irrationalism? A solipsist is appealing to rationalism, as we only have our mind, or the minds of others, to help navigate the world as accurately as possible.
I am a Rationalist?
Pragmatic theory of truth, Coherence theory of truth, and Correspondence theory of truth?
While hallucinogens are associated with shamanism, it is alcohol that is associated with paganism.
The Atheist-Humanist-Leftist Revolutionaries Shows in the prehistory series:
Show six: Emergence of hierarchy, sexism, slavery, and the new male god dominance: Paganism 7,000-5,000 years old: related to “Anarchism and Socialism” (Capitalism) (World War 0) Elite and their slaves!
Show eight: Paganism 4,000 years old: Moralistic gods after the rise of Statism and often support Statism/Kings: related to “Anarchism and Socialism” (First Moralistic gods, then the Origin time of Monotheism)
Prehistory: related to “Anarchism and Socialism” the division of labor, power, rights, and resources: VIDEO
Pre-animism 300,000 years old and animism 100,000 years old: related to “Anarchism and Socialism”: VIDEO
Totemism 50,000 years old: related to “Anarchism and Socialism”: VIDEO
Shamanism 30,000 years old: related to “Anarchism and Socialism”: VIDEO
Paganism 12,000 years old: related to “Anarchism and Socialism” (Pre-Capitalism): VIDEO
Paganism 7,000-5,000 years old: related to “Anarchism and Socialism” (Capitalism) (World War 0) Elite and their slaves: VIDEO
Paganism 5,000 years old: progressed organized religion and the state: related to “Anarchism and Socialism” (Kings and the Rise of the State): VIDEO
Paganism 4,000 years old: related to “Anarchism and Socialism” (First Moralistic gods, then the Origin time of Monotheism): VIDEO
I do not hate simply because I challenge and expose myths or lies, any more than others should be thought of as loving simply because they protect their favored myths or lies and hide them from challenge.
The truth is best championed in the sunlight of challenge.
An archaeologist once said to me “Damien religion and culture are very different”
My response: So are you saying it was always that way? For instance, would you say Native Americans’ cultures are separate from their religions? And do you think it always was the way you believe?
I had said that religion was a cultural product. That is still how I see it, and there are other archaeologists who think much as I do. Gods, too, are the myths of cultures that did not understand science or the world around them, seeing magic/the supernatural everywhere.
I personally think there is a goddess, and not enough evidence to support a male god, at Çatalhöyük; but if there were both a male god and a female goddess, then I know what kind of gods they would have been, like those of Proto-Indo-European mythology.
*Next is our series idea that was addressed in, Anarchist Teaching as Free Public Education or Free Education in the Public: VIDEO
Our future video series: Organized Oppression: Mesopotamian State Force and the Politics of power (9,000-4,000 years ago) adapted from: The Complete and Concise History of the Sumerians and Early Bronze Age Mesopotamia (7000-2000 BC): https://www.youtube.com/watch?v=szFjxmY7jQA
The “Atheist-Humanist-Leftist Revolutionaries”
Cory Johnston ☭ Ⓐ Atheist Leftist @Skepticallefty & I (Damien Marie AtHope) @AthopeMarie (my YouTube & related blog) are working jointly in atheist, antitheist, antireligionist, antifascist, anarchist, socialist, and humanist endeavors in our videos together, generally, every other Saturday.
Why Does Power Bring Responsibility?
Think: how often is it the powerless who start wars, oppress others, or commit genocide? So, I guess the question to us all is: how can power not carry responsibility, in any humane conception? I see the deep ethical responsibility that where there is power there must be a humanistic responsibility of ethical and empathic stewardship of that power. Will I be brave enough to be kind? Will I possess enough courage to be compassionate? Will my valor reach its height of empathy? I, as everyone, earn justified respect by actions that are good, ethical, just, protecting, and kind. Do I have enough self-respect to put my love for humanity’s flourishing over being brought down by some of its bad actors? May we all be the ones doing good actions in the world, to help human flourishing.
I create the world I want to live in, striving for flourishing. This is not a place but a positive potential involvement and promotion; a life of humanist goal precision. To master oneself also means mastering the positive prosocial behaviors needed for human flourishing. I may have lost a god myth as an atheist, but I am happy to tell you, my friend, it is exactly because of that, leaving that mental terrorizer, god belief, that I truly regained my connected, ethical, and kind humanity.
Cory and I will talk about prehistory and theism, addressing the relevance to atheism, anarchism, and socialism.
At the same time as the rise of the male god 7,000 years ago came the rise of violence, war, and clans, then kingdoms, then empires, then states. It is all connected back to 7,000 years ago, and it moved across the world.
The Mind of a Skeptical Leftist (YouTube)
Cory Johnston: Mind of a Skeptical Leftist @Skepticalcory
The Mind of a Skeptical Leftist By Cory Johnston: “Promoting critical thinking, social justice, and left-wing politics by covering current events and talking to a variety of people. Cory Johnston has been thoughtfully talking to people and attempting to promote critical thinking, social justice, and left-wing politics.”
He needs our support. We rise by helping each other.
Damien Marie AtHope (“At Hope”) Axiological Atheist, Anti-theist, Anti-religionist, Secular Humanist. Rationalist, Writer, Artist, Poet, Philosopher, Advocate, Activist, Psychology, and Armchair Archaeology/Anthropology/Historian.
Damien is interested in: Freedom, Liberty, Justice, Equality, Ethics, Humanism, Science, Atheism, Antitheism, Antireligionism, Ignosticism, Left-Libertarianism, Anarchism, Socialism, Mutualism, Axiology, Metaphysics, LGBTQI, Philosophy, Advocacy, Activism, Mental Health, Psychology, Archaeology, Social Work, Sexual Rights, Marriage Rights, Women’s Rights, Gender Rights, Child Rights, Secular Rights, Race Equality, Ageism/Disability Equality, Etc. And a far-leftist, “Anarcho-Humanist.”
Damien Marie AtHope (Said as “At” “Hope”)/(Autodidact Polymath but not good at math):
Axiological Atheist, Anti-theist, Anti-religionist, Secular Humanist, Rationalist, Writer, Artist, Jeweler, Poet, “autodidact” Philosopher, schooled in Psychology, and “autodidact” Armchair Archaeologist/Anthropologist/Pre-Historian (knowledgeable in the range of 1 million to 5,000/4,000 years ago). I am an anarchist socialist politically.
Reasons for or Types of Atheism