Cognitive science is the interdisciplinary scientific study of minds as information processors. It includes research on how information is processed (in faculties such as perception, language, reasoning, and emotion), represented, and transformed in a (human or other animal) nervous system or machine (e.g., a computer). Cognitive science draws on multiple research disciplines, including psychology, artificial intelligence, philosophy, neuroscience, linguistics, anthropology, sociology, and education. It spans many levels of analysis, from low-level learning and decision mechanisms to high-level logic and planning, and from neural circuitry to modular brain organization. The term cognitive science was coined by Christopher Longuet-Higgins in his 1973 commentary on the Lighthill report, which concerned the then-current state of artificial intelligence research. In the same decade, the journal Cognitive Science and the Cognitive Science Society were founded. Cognitive science has a pre-history traceable back to ancient Greek philosophical texts (see Plato’s Meno), and certainly must include writers such as Descartes, David Hume, Immanuel Kant, Benedict de Spinoza, Nicolas Malebranche, Pierre Cabanis, Leibniz, and John Locke. But although these early writers contributed greatly to the philosophical discovery of the mind, and this would ultimately lead to the development of psychology, they were working with an entirely different set of tools and core concepts than those of the cognitive scientist.
A society is a group of people who form a semi-closed system. At its simplest, the term refers to a large group of people sharing their own culture and institutions. The English word society is derived from the French société, which in turn had its origin in the Latin societas, a “friendly association with others,” from socius, meaning “companion, associate, comrade, or business partner.” Thus the meaning of society is closely related to what is considered to be social. Implicit in the meaning of society is that its members may share some mutual concern or interest, a common objective, or common characteristics. The social sciences generally use the term to mean a group of people who form a semi-closed social system, in which most interactions are with other individuals belonging to the group. More abstractly, a society is a network of relationships between social entities. A society is also sometimes defined as an interdependent community, though the sociologist Ferdinand Tönnies sought to draw a contrast between society and community. An important feature of society is social structure, aspects of which include roles and social ranking.
The social sciences are a group of academic disciplines that study human aspects of the world. They differ from the arts and humanities in that the social sciences tend to emphasize the use of the scientific method in the study of humanity, including quantitative and qualitative methods.
Anthropology (/ænθrɵˈpɒlədʒi/) is the study of humanity. It has origins in the humanities, the natural sciences, and the social sciences. The term “anthropology” is from the Greek anthrōpos (ἄνθρωπος), “human being”, and -logia (-λογία), “discourse” or “study”, and was first used in 1501 by German philosopher Magnus Hundt.
Anthropology’s basic concerns are “What defines Homo sapiens?”, “Who are the ancestors of modern Homo sapiens?”, “What are humans’ physical traits?”, “How do humans behave?”, “Why are there variations and differences among different groups of humans?”, “How has the evolutionary past of Homo sapiens influenced its social organization and culture?” and so forth.
In the United States, contemporary anthropology is typically divided into four sub-fields: cultural anthropology (also known as social anthropology), archaeology, linguistic anthropology, and physical (or biological) anthropology. The four-field approach to anthropology is reflected in many undergraduate textbooks as well as anthropology programs (e.g. Michigan, Berkeley, Penn). At universities in the United Kingdom, and in much of Europe, these “sub-fields” are frequently housed in separate departments and are seen as distinct disciplines.
The social and cultural sub-field has been heavily influenced by structuralist and post-modern theories, as well as a shift toward the analysis of modern societies (an arena more typically in the remit of sociologists). From the 1970s through the 1990s there was an epistemological shift away from the positivist traditions that had largely informed the discipline. During this shift, enduring questions about the nature and production of knowledge came to occupy a central place in cultural and social anthropology. In contrast, archaeology, biological anthropology, and linguistic anthropology remained largely positivist. Due to this difference in epistemology, anthropology as a discipline has lacked cohesion over the last several decades. This has even led to departments diverging, for example in the 1998–9 academic year at Stanford University, where the “scientists” and “non-scientists” divided into two departments: anthropological sciences and cultural & social anthropology; these departments later reunified in the 2008–9 academic year.
Technology Management is a set of management disciplines that allows organizations to manage their technological fundamentals to create competitive advantage. Typical concepts used in technology management are technology strategy (the logic or role of technology in the organization), technology forecasting (identification of possibly relevant technologies for the organization, for instance through technology scouting), technology roadmapping (mapping technologies to business and market needs), technology project portfolio (a set of projects under development), and technology portfolio (a set of technologies in use).
The role of the technology management function in an organization is to understand the value of a given technology for the organization. Continuous development of a technology is worthwhile only as long as it creates value for the customer, so the technology management function should be able to argue when to invest in technology development and when to withdraw.
Technology Management can also be defined as the integrated planning, design, optimization, operation, and control of technological products, processes, and services; a broader definition is the management of the use of technology for human advantage.
The Association of Technology, Management, and Applied Engineering defines Technology Management as the field concerned with the supervision of personnel across the technical spectrum and a wide variety of complex technological systems. Technology Management programs typically include instruction in production and operations management, project management, computer applications, quality control, safety and health issues, statistics, and general management principles.
Transhumanism, often abbreviated as H+ or h+, is an international intellectual and cultural movement that affirms the possibility and desirability of fundamentally transforming the human condition by developing and making widely available technologies to eliminate aging and to greatly enhance human intellectual, physical, and psychological capacities. Transhumanist thinkers study the potential benefits and dangers of emerging technologies that could overcome fundamental human limitations, as well as the ethical matters involved in developing and using such technologies. They predict that human beings may eventually be able to transform themselves into beings with such greatly expanded abilities as to merit the label “posthuman”. Transhumanism is therefore viewed as a subset of philosophical “posthumanism”.
The contemporary meaning of the term “transhumanism” was foreshadowed by one of the first professors of futurology, FM-2030, who taught “new concepts of the Human” at The New School in New York City in the 1960s, when he began to identify people who adopt technologies, lifestyles, and world views transitional to “posthumanity” as “transhuman”. This foresight would lay the intellectual groundwork for British philosopher Max More to begin articulating the principles of transhumanism as a futurist philosophy in 1990, and organizing in California an intelligentsia that has since grown into the worldwide transhumanist movement.
The transhumanist vision of a transformed future humanity, which is influenced by the techno-utopias depicted in some works of science fiction, has attracted many supporters and detractors from a wide range of perspectives. Transhumanism has been condemned by one critic, Francis Fukuyama, as “the world’s most dangerous idea”, while one proponent, Ronald Bailey, counters that it is the “movement that epitomizes the most daring, courageous, imaginative, and idealistic aspirations of humanity”.
Technological singularity refers to the hypothetical future emergence of greater-than-human intelligence. Since the capabilities of such an intelligence would be difficult for an unaided human mind to comprehend, the technological singularity is seen as an intellectual event horizon, beyond which the future becomes difficult to understand or predict. Nevertheless, proponents of the singularity typically anticipate such an event to precede an “intelligence explosion”, wherein superintelligences design successive generations of increasingly powerful minds. The term was coined by science fiction writer Vernor Vinge, who argues that artificial intelligence, human biological enhancement, or brain–computer interfaces could be possible causes for the singularity. The concept has been popularized by futurists like Ray Kurzweil and is widely expected by proponents to occur in the early to mid twenty-first century. Kurzweil writes that, due to paradigm shifts, a trend of exponential growth extends Moore’s law back from integrated circuits to earlier transistors, vacuum tubes, relays, and electromechanical computers. He predicts that the exponential growth will continue, and that in a few decades the computing power of all computers will exceed that of human brains, with superhuman artificial intelligence appearing around the same time.
Many of the most recognized writers on the singularity, such as Vernor Vinge and Ray Kurzweil, define the concept in terms of the technological creation of superintelligence, and argue that it is difficult or impossible for present-day humans to predict what a post-singularity world would be like, due to the difficulty of imagining the intentions and capabilities of superintelligent entities. The term “technological singularity” was originally coined by Vinge, who made an analogy between the breakdown in our ability to predict what would happen after the development of superintelligence and the breakdown of the predictive ability of modern physics at the space-time singularity beyond the event horizon of a black hole. Some writers use “the singularity” in a broader way to refer to any radical changes in our society brought about by new technologies such as molecular nanotechnology, although Vinge and other prominent writers specifically state that without superintelligence, such changes would not qualify as a true singularity. Many writers also tie the singularity to observations of exponential growth in various technologies (with Moore’s Law being the most prominent example), using such observations as a basis for predicting that the singularity is likely to happen sometime within the 21st century.
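At its core, the Moore’s-law-style trend these writers invoke is simple exponential doubling. As a toy illustration only (the two-year doubling period and the 1971 baseline of 2,300 transistors are illustrative assumptions, not Kurzweil’s fitted figures):

```python
# Toy illustration of Moore's-law-style exponential growth.
# Assumptions (illustrative, not Kurzweil's actual parameters):
# a quantity that doubles every 2 years, starting from 2,300
# transistors in 1971 (roughly the Intel 4004).

def projected_count(start_year: int, start_count: float,
                    year: int, doubling_period: float = 2.0) -> float:
    """Exponential projection: the count doubles every `doubling_period` years."""
    return start_count * 2 ** ((year - start_year) / doubling_period)

if __name__ == "__main__":
    for year in (1971, 1991, 2011, 2031):
        print(year, f"{projected_count(1971, 2300, year):.3g}")
```

The point of the sketch is only that constant-period doubling compounds dramatically: under these assumed parameters the projected count grows by a factor of 1,024 every twenty years, which is why extrapolations over a few decades produce such striking predictions.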
Technological nationalism is the belief that Canada’s existence as a sovereign, independent nation hinges on its use of communication technology. Communication theorist Maurice Charland developed this concept in relation to the construction of the Canadian Pacific Railway (CPR).
Canada’s greatest challenge in the 19th century was to unite the country across a continent. The construction of the CPR (from 1881 to 1885) was a deliberate political and economic attempt to unite Canada’s regions and link Eastern and Western Canada, the heartland and hinterland respectively. Charland identified this project as based on the nation’s faith in technology’s ability to overcome physical obstacles. As the technology was adapted to suit Canadian needs, it fed the national rhetoric that railroads were an integral part of nation building. This spirit of technological nationalism also fuelled the development of broadcasting in the country and thus further served in the development of a national identity. Paradoxically, however, these technologies, which historian Harold Innis termed “space-binding,” simultaneously supported and undermined the development of a Canadian nation. Based in connection rather than content, they did not favour any particular set of values, except those arising from trade and communication themselves, and so they also contributed to Canada’s integration into first the British, and then the American, empire.
Technological determinism is a reductionist theory that presumes that a society’s technology drives the development of its social structure and cultural values. The term is believed to have been coined by Thorstein Veblen (1857–1929), an American sociologist. The most radical technological determinist in twentieth-century America was most likely Clarence Ayres, a follower of Thorstein Veblen and John Dewey; William Ogburn was also known for his radical technological determinism. Veblen’s contemporary, popular historian Charles Beard, provided this apt determinist image: “Technology marches in seven-league boots from one ruthless, revolutionary conquest to another, tearing down old factories and industries, flinging up new processes with terrifying rapidity.” Most interpretations of technological determinism share two general ideas:
that the development of technology itself follows a predictable, traceable path largely beyond cultural or political influence, and
that technology in turn has “effects” on societies that are inherent, rather than socially conditioned or produced because that society organizes itself to support and further develop a technology once it has been introduced.
Strict adherents to technological determinism do not believe the influence of technology differs based on how much a technology is or can be used. Instead of considering technology as part of a larger spectrum of human activity, technological determinism sees technology as the basis for all human activity.
Technological determinism has been summarized as “the belief in technology as a key governing force in society” (Merritt Roe Smith) and as “the idea that technological development determines social change” (Bruce Bimber). It changes the way people think and how they interact with others, and can be described as “a three-word logical proposition: ‘Technology determines history’” (Raymond Williams). It is “the belief that social progress is driven by technological innovation, which in turn follows an ‘inevitable’ course” (Michael L. Smith). This ‘idea of progress’ or ‘doctrine of progress’ centres on the idea that social problems can be solved by technological advancement, and that this is the way society moves forward. Technological determinists believe that “‘You can’t stop progress’, implying that we are unable to control technology” (Lelia Green). This suggests that we are somewhat powerless and that society allows technology to drive social change because “societies fail to be aware of the alternatives to the values embedded in it [technology]” (Merritt Roe Smith).
Technocriticism is a branch of critical theory devoted to the study of technological change.
Technocriticism treats technological transformation as historically specific changes in personal and social practices of research, invention, regulation, distribution, promotion, appropriation, use, and discourse, rather than as an autonomous or socially indifferent accumulation of useful inventions, or as an uncritical narrative of linear “progress”, “development” or “innovation”.
Technocriticism studies these personal and social practices in their changing practical and cultural significance. It documents and analyzes both their private and public uses, and often devotes special attention to the relations among these different uses and dimensions. Recurring themes in technocritical discourse include the deconstruction of essentialist concepts such as “health”, “human”, “nature” or “norm”.
Technocritical theory can be either “descriptive” or “prescriptive” in tone. Descriptive forms of technocriticism include some scholarship in the history of technology, science and technology studies, cyberculture studies and philosophy of technology. More prescriptive forms of technocriticism can be found in the various branches of technoethics, for example, media criticism, infoethics, bioethics, neuroethics, roboethics, nanoethics, existential risk assessment and some versions of environmental ethics and environmental design theory.
Figures engaged in technocritical scholarship and theory include Donna Haraway and Bruno Latour (who work in the closely related field of science studies), N. Katherine Hayles (who works in the field of Literature and Science), Phil Agre and Mark Poster (who work in intellectual history), Marshall McLuhan and Friedrich Kittler (who work in the closely related field of media studies), Susan Squier and Richard Doyle (who work in the closely related field of medical sociology), and Hannah Arendt, Walter Benjamin, Martin Heidegger, and Michel Foucault (who sometimes wrote about the philosophy of technology). Technocriticism can be juxtaposed with a number of other innovative interdisciplinary areas of scholarship which have surfaced in recent years, such as technoscience and technoethics.
Technocracy is a form of government in which engineers, scientists, health professionals, and other technical experts are in control of decision making in their respective fields. The term technocracy derives from the Greek words tekhne meaning skill and kratos meaning power, as in government, or rule. Thus the term technocracy denotes a system of government where those who have knowledge, expertise or skills compose the governing body. In a technocracy decision makers would be selected based upon how highly knowledgeable they are, rather than how much political capital they hold.
Technocrats are individuals with technical training and occupations who perceive many important societal problems as being solvable, often while proposing technology-focused solutions. The administrative scientist Gunnar K. A. Njalsson theorizes that technocrats are primarily driven by their cognitive “problem-solution mindsets” and only in part by particular occupational group interests. Their activities and the increasing success of their ideas are thought to be a crucial factor behind the modern spread of technology and the largely ideological concept of the “information society”. Technocrats may be distinguished from “econocrats” and “bureaucrats” whose problem-solution mindsets differ from those of the technocrats.
In all cases, technical and leadership skills are selected through bureaucratic processes on the basis of specialized knowledge and performance, rather than by democratic election by those without the knowledge or skills deemed necessary. Some forms of technocracy are a form of meritocracy, a system in which the “most qualified” and those who decide the validity of qualifications are the same people. Other forms have been described not as an oligarchic human group of controllers, but rather as administration by science without the influence of special interest groups.