liberal arts

Thursday, May 24, 2007

architect

An architect is a person who is involved in the planning, design and oversight of a building's construction. The word "architect" is derived from the Latin architectus or from the Greek arkhitekton. In the broadest sense an architect is a person who translates the user's needs into the builder's requirements. An architect must thoroughly understand the building and operational codes to which his or her design must conform. That degree of knowledge is necessary so that he or she is not apt to omit any necessary requirements, or produce improper, conflicting, ambiguous, or confusing requirements. Architects must understand the various methods available to the builder for building the client's structure, so that they can negotiate with the client to produce the best possible compromise between the results desired and explicit cost and time constraints.

Architects must frequently make building design and planning decisions that affect the safety and well being of the general public. Architects are required to obtain specialized education and documented work experience to obtain licensure, similar to the requirements for other professionals, with requirements for practice varying from place to place (see below).

The most prestigious award a living architect can receive is the Pritzker Prize, often termed the "Nobel Prize for architecture." Other awards for excellence in architecture are given by national and regional professional associations such as the American Institute of Architects and the Royal Institute of British Architects. Other prestigious architectural awards are the Alvar Aalto Medal (Finland) and the Carlsberg Architecture Prize (Denmark).

Although "architect" technically and by law refers to a licensed professional, the word is frequently mis-used in the broader sense noted above to define someone who brings order to a built or non-built situation.

Professional requirements

United States

In the United States, people wishing to become licensed architects are required to meet the requirements of their respective state. Each state has a registration board to oversee that state's licensure laws. In 1919, the National Council of Architectural Registration Boards (NCARB) was created to ensure parity between the states' often conflicting rules. The registration boards of all 50 states (and five territories) are NCARB member boards.

Requirements vary between jurisdictions, but there are three common requirements for registration: education, experience and examination. About half of the states require a professional degree from a school accredited by the NAAB to satisfy their education requirement; this would be either a B.Arch or M.Arch degree. The experience requirement for degreed candidates is typically the Intern Development Program (IDP), a joint program of NCARB and the AIA. IDP provides a framework of base skills and core competencies for the intern architect. The intern architect needs to earn 700 training units (TUs) diversified across 16 categories; each TU is equivalent to 8 hours of experience working under the direct supervision of a licensed architect. The states that waive the degree requirement typically require a full 10 years of experience, in combination with the IDP diversification requirements, before the candidate is eligible to sit for the examination. California requires C-IDP (Comprehensive Intern Development Program), which builds on the seat-time requirement of IDP by also requiring documentation that learning has occurred. All jurisdictions use the Architect Registration Examination (ARE), a series of nine computerized exams administered by NCARB. NCARB also offers a certification for architects meeting its model standard: an NAAB degree, IDP completion and ARE passage. This certificate facilitates reciprocity between the member boards should an architect desire registration in a different jurisdiction. All architects licensed by their respective states have professional status as Registered Architects (RA).
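
As a rough illustration of the arithmetic behind the IDP experience requirement described above, the short Python sketch below converts the 700 training units into hours and working years; the 2,000-hour working year is an assumed figure used only for this estimate.

    # Rough arithmetic behind the IDP experience requirement described above.
    # The 2,000-hour working year is an assumption for illustration only.
    TRAINING_UNITS = 700        # total TUs, spread across 16 categories
    HOURS_PER_TU = 8            # each TU equals 8 hours of supervised experience
    HOURS_PER_WORK_YEAR = 2000  # assumed full-time working year

    total_hours = TRAINING_UNITS * HOURS_PER_TU
    years = total_hours / HOURS_PER_WORK_YEAR

    print(f"{total_hours} hours of supervised experience")  # 5600 hours
    print(f"about {years:.1f} years of full-time work")     # about 2.8 years

On these assumptions, the IDP works out to roughly three years of full-time supervised practice.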

There are three types of professional degrees in architecture in the United States: the Bachelor of Architecture (B.Arch), the Master of Architecture (M.Arch), and the doctorates, namely the Doctor of Architecture (D.Arch), Doctor of Design (D.Des.) and Doctor of Philosophy (Ph.D.). Non-professional degrees include the Bachelor of Arts in Architecture (BA), Bachelor of Science in Architecture (BS), Bachelor of Fine Arts in Architecture (BFA Arch), and Bachelor of Environmental Design (B.Envd). A non-professional degree typically takes four years to complete (as opposed to five years for a Bachelor of Architecture) and may form the first stage of a later professional degree (a "4+2" plan comprises a four-year BA or BS and a two-year Master of Architecture). The five-year B.Arch and six-year M.Arch are regarded as virtual equals in the registration and accreditation processes. Other programs (such as those offered at Drexel University and Boston Architectural College) combine the required educational courses with the work component necessary to sit for licensure exams. Such programs often allow students to test for licensure immediately upon graduation, rather than having to work in the field for several years after graduation before becoming eligible, as is common in more traditional programs.

Depending on the policies of the registration board for the state in question, it is sometimes possible to become licensed as an architect in other ways, such as reciprocal licensure for overseas architects or working under an architect as an intern for an extended period of time.

Professional organizations for Architects in the United States include:

* The American Institute of Architects is a professional organization representing architects licensed in the United States, and offers its members services such as continuing education programs, standard contracts and other practice-related documents, and design award programs. The AIA is not directly involved with the professional licensing of architects, although AIA members usually place the suffix "AIA" after their names.

* The Society of American Registered Architects or SARA is another professional organization for registered architects in the United States. Its activities and services include conventions, continuing education programs, standard contracts and other practice-related documents, and design award programs. Members of this organization may have the suffix "SARA" after their name.

* The National Organization of Minority Architects or NOMA is an organization for minority registered architects and minority architectural students in the United States. It was created in 1971 to bring to light the contributions of African Americans and other minorities to the field of architecture in the United States and the world.

US Earnings Outlook

According to the 2006–2007 Occupational Outlook Handbook published by the US Department of Labor, the median salary of architects was $62,960, with the middle 50% earning between $46,690 and $79,230. This was somewhat above accountants (median income $50,770) and college professors (median income $51,800), and on par with most branches of engineering (median income of roughly $60K). Senior architects and partners in mid to large size firms in many urban areas typically have earnings that exceed $100K annually. Principals in larger national firms may have incomes two or three times that figure or more, comparable to many executive management positions.

Many architects elect to move into real estate development, corporate planning, project management and other specialized roles which can earn significantly higher income than the industry median.

United Kingdom

In the United Kingdom the title "architect" is protected by law, and only those who have the recognised qualifications ratified by the Architects Registration Board [1] in conjunction with the Royal Institute of British Architects are allowed to call themselves architects. In the United Kingdom it takes a minimum of seven years to train as an architect. Those wishing to become architects must first study at a recognised university-level school of architecture. Though there are some variations from university to university, the basic principle is that in order to qualify as an architect one must pass through three stages (their minimum durations are added up in the sketch after the list):

1. On completing a three-year B.A., B.Arch or B.Sc degree in architecture, the candidate receives exemption from RIBA Part I. There then follows a minimum of one year which the candidate spends in an architect's office gaining work experience.
2. The candidate must then complete a postgraduate university course, usually two years, to receive either a postgraduate diploma (Dip.Arch) or a master's degree (M.Arch). On completing that course, the candidate receives exemption from Part II of the RIBA process.
3. The candidate must then spend a further period of at least one year gaining experience before being allowed to take the RIBA Part III examination in Professional Practice and Management.[2]
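
As a quick check, summing the minimum durations of the stages listed above recovers the stated seven-year minimum; the small Python sketch below is purely illustrative.

    # Minimum durations of the stages in the UK route described above.
    stages = {
        "Part I degree (B.A., B.Arch or B.Sc)": 3,
        "work experience before Part II": 1,
        "Part II postgraduate course": 2,
        "experience before the Part III exam": 1,
    }
    print(sum(stages.values()), "years minimum")  # 7 years minimum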

Australia

In Australia the title of architect is legally protected and architects are registered through state boards. These boards are affiliated through the Architects Accreditation Council of Australia (AACA) [3]. The AACA also provides accreditation for schools and assessments for architects with overseas qualifications for the purposes of migration.

There are three key requirements for registration: a professional degree from a school of architecture accredited by the AACA; at least two years of practical experience; and completion of the architectural practice examination.

Architects may also belong to the Royal Australian Institute of Architects, which is the professional organization; members use the suffix RAIA after their names.

Canada

In Canada, architects are required to belong to provincial architectural associations that require them to complete an accredited degree in architecture, finish a multi-year internship process, pass a series of exams, and pay an annual fee to acquire and maintain a license to practice.

The Royal Architectural Institute of Canada (RAIC) [4] is a national body that aims to be "the voice of Architecture and its practice in Canada". Members are permitted to use the suffix MRAIC after their names. Not all members of the RAIC hold accredited degrees in architecture, and not all Canadian architects are members of the RAIC.

Schools of Architecture

* Refer to: List of international architecture schools
* See: Bachelor of Architecture and Master of Architecture for schools in the United States

Professionals engaged in the design and supervision of construction projects prior to the 20th century were not necessarily trained in a separate architecture program in an academic setting but usually carried the title of "master builder" or "surveyor", after serving a number of years as an apprentice (such as Sir Christopher Wren). The formal study of architecture in academic institutions played a pivotal role in the development of the profession as a whole, serving as a focal point for advances in architectural technology and theory.

The most significant schools in the history of architecture include:

* 1800s — École des Beaux-Arts for the formation of the professional architecture school, and the succession to the École Nationale Supérieure des Beaux-Arts. The École Nationale des Ponts et Chaussées, Paris, the world's first school of engineering.
* late 1800s and early 1900s — MIT the first professional school in the U.S., legitimizing architectural practice as a profession. The Glasgow School of Art, Scotland, for developing Beaux-Arts principles into the age of industry, within the setting of the purpose-built building by Charles Rennie Mackintosh. The Bauhaus (1919–1933), Germany, for combining courses in product design, the arts, building, and craftsmanship.
* 1930s–1950s — Harvard and Illinois Institute of Technology for the introduction of Gropius, Mies and Modernism in the U.S. academic context
* 1960s — UC Berkeley for creation of the 4-year B.Arch. and 2-year M.Arch. system; Architectural Association, London, for the creation of a 'unit master' system, and the influence of teachers belonging to Archigram.
* 1970s — IUAV, 'The School of Venice' [5] for re-evaluating modernism through history and critical theory. The Bartlett School of Architecture, London, for reinventing architecture as 'environmental design'.
* 1980s — Princeton and Cooper Union for pedagogical innovation, Education of an Architect
* 1990s — Columbia and Sci-Arc for the rise of formalism, implementation and influence of new computer tools, and conceptual refocusing.

US rankings

Partial US undergraduate program rankings from the 2006 Design Intelligence (DI) survey. The ranking was based on a survey of 400 firms that evaluated graduates hired during the past five years and considered how prepared they were for “real-world” jobs.

* 1. Rensselaer Polytechnic Institute
* 2. University of Texas at Austin
* 3. California Polytechnic State University San Luis Obispo (tie)
* 3. Rice University (tie)

Complete US graduate program rankings from the 2006 Design Intelligence (DI) survey.

* 1. Harvard University
* 2. University of Cincinnati
* 3. Yale University
* 4. Massachusetts Institute of Technology
* 5. University of Virginia
* 6. Cornell University
* 6. Rice University
* 6. Washington University in St. Louis
* 9. Columbia University
* 10. Virginia Polytechnic Institute and State University
* 11. University of Pennsylvania
* 12. Princeton University
* 12. University of Illinois at Urbana-Champaign
* 12. University of Texas at Austin
* 15. Rhode Island School of Design
* 16. University of Michigan
* 17. Southern California Institute of Architecture
* 17. University of Florida
* 19. Texas A&M University
* 19. University of Notre Dame

Poverty History

Poverty in early modern Europe was not well understood—at least outside of the biblical conception that the poor will always be with us—and the extent of poverty in the centuries leading up to the industrial revolution has not been well mapped—not by historians, and certainly not by the contemporaries who were confronted by the hungry and diseased, the homeless and fatherless, on their doorsteps. Yet there can be no doubt that both the threat and the reality of poverty were pervasive throughout the early modern period.

The material and spiritual needs of the poor were the subject of endless clerical rumination, which sometimes resulted in actual assistance. The needs of the poor likewise merited the extensive practical consideration of urban magistrates and rural nobility alike, whose best interests often dictated that they do something to lessen, or at least justify, the suffering that they saw around them. Poverty generated responses from the poor ranging from quiet acquiescence and submission to the mercy of God to the violent or coercive appropriation of resources, with a host of possibilities in between. One clear marker of the poor was the need to engage in behaviors intended to ward off hunger, cold, nakedness, or other material deprivation. It is not surprising, therefore, that when historians try to count the poor in early modern Europe, they inevitably begin with the lists of those who applied for and received charity; those who professed in the criminal court records that they turned to theft or prostitution out of desperation; those who sought exemption from the payment of taxes and dues; and those caught participating in bread riots, or myriad other activities located firmly in what historians have come to refer to as "the economy of makeshifts."

Toward a Definition

Any attempt to determine the number of poor in early modern Europe presumes that there exists a clear definition of poverty as well as widely agreed upon indicators of its extent and severity. This is not the case. At the most basic level, the poor were best defined by what they were not. Thus, in early modern Europe poverty could be characterized as the antithetical state either to that of being rich (the most common modern understanding) or that of being powerful (the more typical medieval conception). While power and wealth often travel together, they need not necessarily do so. Certainly the processes of commercialization and urbanization begun in the High Middle Ages and accelerated in the sixteenth and seventeenth centuries, concomitant with the expansion of capitalist economic attitudes and behaviors, worked to increase the importance of money, thereby giving the pecuniary definition of poverty greater cultural resonance over time. But the conflation of the poor with the weak persisted.

This lingering medieval resonance was facilitated in large part by the ongoing influence of the biblical categories of the poor, which consisted especially of widows, orphans, prisoners, and the disabled. All of these groups, which we might now classify as the "structural poor," were likely to suffer from limited resources and to wield little political or social power. They were marked by their dependence on others (notably male, be they husbands, fathers, law enforcers, or doctors) for food and shelter, as well as for protection. And it was precisely this dependence that marked them as "deserving"; that is, worthy of receiving the love (caritas) of the community as manifested in material aid. The undeserving poor, by contrast, were believed to be those who were capable of work but who out of laziness or sheer malice refused to earn their own keep. To aid them was not only counterproductive to the health of the economic and social order, it was in fact a sin, and harmful to the soul of both giver and recipient. If the indiscriminate giving of aid was ever practiced in the medieval past (the evidence is mixed), it was certainly no longer tolerated in the early modern period either by intellectuals or bureaucrats.

This is not to say, however, that exceptions were not made to the biblical rule that the able-bodied who do not work do not eat. Other categories of legitimated poor existed alongside those structurally dependent groups identified in the Bible. The most important of these were the voluntary poor, the shamefaced poor, and what we might now refer to as the cyclical poor. The voluntary poor were those individuals, usually acting in the context of well-established organizations or societies, who had renounced material comforts in favor of a life of humiliation following Christ. The most important of these groups were the mendicant monastic orders that came to prominence in the milieu of urban economic prosperity during the High Middle Ages, most notably the Franciscans and the Dominicans. Their renunciation of material possessions was supposed to be so complete that the only way individual friars could survive was to beg for their bread while they traveled about preaching to souls. The mendicant orders were the subject of heated debates about the spiritual legitimacy of their mission and the social impact of their method, both at the time of their establishment and in the context of the Reformation. Nonetheless, they survived, and even flourished in some parts of Catholic Europe following the Tridentine reforms, remaining an important part of the charitable landscape of early modern Europe.

A less contentious exception to the biblical rule that those who do not work do not eat were the so-called shamefaced poor. This group consisted of members of the ancient nobility who had fallen on hard times economically and could no longer afford the style of life that they were expected to maintain. In truly dire cases they could no longer afford even to support themselves at the margin of subsistence. The source of their distress was often a combination of overspending and the concomitant loss of family land, or the declining profitability of its exploitation by tenant farmers or wage laborers. Because the very definition of nobility precluded members of noble families from working their own land or marshaling their remaining resources to a trade or business, their impoverished condition could only be alleviated by the charitable assistance of others. Moreover, such aid had to be dispensed with discretion in order to avoid any further embarrassment being heaped on the families concerned. In a world in which work was expected of all who were physically able, the shamefaced poor make for an odd exception from the modern perspective. For here was a group whose members were denied the opportunity to work on account of their social status and not their physical attributes. But with the exception of England and the Netherlands, which commercialized early (the Netherlands never having had a strong tradition of local nobility anyway), the shamefaced poor remained an important category of those receiving relief in Europe at least until the social disruptions of the French Revolution. And even in the decidedly bourgeois environment of the Dutch Republic, members of the middling classes (such as urban citizens with corporate rights and artisans with guild memberships) in straitened circumstances received more generous and reliable relief than did the very poor, who could not claim such corporate protections. Downward social mobility, regardless of the level at which one started, was something that all European societies tried to protect against, suggesting that poverty was understood at least as much as a relative state as an absolute one.

The cyclical poor were also made worthy of assistance on account of their changing status over time. Two kinds of cases are especially prominent in this regard. The first included those families that were in the early stages of their household life cycle, with (often many) young children to support and limited access to wage-earning labor. Women's work was poorly remunerated at the best of times, and when pregnant and nursing, women's wages could easily drop to zero. In B. Seebohm Rowntree's classic formulation, the most prosperous time in a family's life was following the mother's childbearing years, when at least some of the children were old enough to earn wages but not yet old enough to have begun separate households of their own. The other vulnerable group included those families in which the primary wage earner was temporarily un- or underemployed because of either the natural rhythms of the work year or, increasingly, the business cycle. Until the development of the electrified factory and all-weather transport, winter was a season of slow work at best, not just in agriculture, but in urban crafts and trades as well. And as more and more individuals left farming for industrial and service sector occupations, the impact of trade cycles on employment became more severe. Guilds with cash reserves for emergency support were the primary means of defense against trade cycles, for those lucky enough to enjoy membership. The bread and cast-off clothing distributed during the severest parts of the winter had to suffice for the rest. Neither those with young families nor those with unemployed household heads could count on the unqualified charitable support of their larger communities, however. Then as now, families in such circumstances were subject to the moralistic assessments of those in a position to offer relief. Critics pointed to poor families with many children as evidence that the poor were sexually reckless, a view articulated most famously by the English economist Thomas Malthus (1766–1834). Likewise, the able-bodied unemployed generated a great deal of suspicion about how determinedly they were seeking work and whether they were being too choosy about the type of work they would accept, again not unlike the stigma faced by the unemployed in the modern world.

Poverty and Economic Development

Although, as stated earlier, there are no agreed-upon indicators of the extent and severity of poverty in early modern Europe, many historians have nonetheless felt confident in the belief that a great many Europeans lived either below the poverty line or in imminent danger of dropping below it. This confidence rests in large measure on a commonly shared assumption about the general poverty of all preindustrial economies, in which productivity is low and the probability of risks of all kinds is high. In such an essentially Malthusian world, in which the population constantly threatens to outpace the food supply, the cyclical reappearance of episodes of extreme poverty is guaranteed. Moreover, ways to insure against risk were few or nonexistent. However, despite the attractive logic of the presumption that poverty follows from economic underdevelopment, it suffers from one fundamental inconsistency with the facts: that is, poverty continues to exist in the highly developed, immensely productive, risk-averse, and decidedly non-Malthusian modern first world. Thus the classic narratives about economic development are insufficient for a true understanding of poverty in early modern Europe.

One striking alternative to the view of poverty as solely a consequence of underdevelopment has been offered by the Marxist historians Catharina Lis and Hugo Soly, who argue that economic development has not only failed to eradicate poverty, it has actually increased the likelihood of it. Specifically, they maintain that the incidence of poverty spread as capitalism developed, first as an agricultural system and later as an industrial system, over the course of the early modern period. The key mechanism they see at work behind this process is that of proletarianization, or the increasing separation of workers from the means of production and thus their forced reliance on wages for their maintenance. They begin their narrative with a fairly dire medieval landscape in which "40 to 60 per cent of western European peasants disposed of insufficient land to maintain a family" (p. 15), and then chart from there what they understand to be the processes of further impoverishment over time: the long-term trend toward diminishment in the size of peasant holdings; the development of social policies that criminalized the poor, thereby permitting the better regulation of the labor market (most notably the Elizabethan Poor Law in England, statutes against vagrancy and begging in both Catholic and Protestant Europe, and the institution of workhouses in towns both great and small); and most importantly, the massive shift of the labor force away from small independent holdings and craft workshops toward wage labor in commercial agriculture and industry. While they have supporting evidence of the hardship experienced by particular groups of people and sectors of the economy during this time of radical social and economic change, they fail to make a compelling case for an increase in poverty overall. The spread of capitalist enterprises certainly had its losers, but it had its winners as well. Simply documenting the former in great detail does not demonstrate that the scourge of poverty spread between the end of the Middle Ages and the dawn of the modern era.

Where the classic development story clearly neglects questions of distribution, the Marxist story downplays the importance of massive productivity gains in increasing the pool of material resources to be distributed. Both approaches, then, are inadequate to explain both the origins of poverty in the preindustrial past and its persistence in the face of rapid economic development. If we consider only the material facts of the share of food in the average household budget, lengthening life span, energy available per capita to provide light and heat and perform work, and the remarkable growth of consumables in both number and variety, there can be no doubt that poverty, as understood to be strictly a matter of material deprivation, has decreased precipitously over time, with many of the initial gains achieved over the course of the early modern period. However, poverty is also a relative condition, and it may well be the case that the massive increases in the size of the resource pool have had the counterintuitive effect of highlighting distributional inequities in ways that were not as obvious when the material basis of society was so much lower on average.

The experience of early modern Europe also suggests that poverty is a treatable condition, at least to some extent. Those places that experimented seriously with charitable social policies saw genuine improvements in overall well-being. Two notable examples will have to suffice as evidence for our purposes here. The first is the Tudor-Stuart program of food relief in seventeenth-century England, which demonstrably lowered the variance of wheat prices and contributed to lower levels of noncrisis mortality than in either of the periods before or after the policies were in effect. The second is the strong commitment shown by urban magistrates and guild members in the Dutch Republic to provide outdoor relief for those affected by the cyclical harbingers of poverty, as well as institutional care for the aged, the infirm, and the orphaned, facilitating when possible entry or reentry into the middling world of work. Visitors to the Dutch Republic from all over Europe remarked on the ubiquity and generosity of these institutions and their salubrious effect on the body social. In both of these examples, beneficent social policies traveled hand in hand with economic prosperity, probably as both cause and effect.

Friday, April 06, 2007

History

History is the study of past human events and activities. Although this broad discipline has often been classified under either the humanities or the social sciences, it can be seen as a bridge between them, incorporating methodologies from both fields of study. As a field of study, history encompasses many subfields and ancillary fields, including chronology, historiography, genealogy, paleography, and cliometrics. Traditionally, historians have attempted to answer historical questions through the study of written documents, although historical research is not limited merely to these sources. In general, the sources of historical knowledge can be separated into three categories: what is written, what is said, and what is physically preserved, and historians often consult all three. Historians frequently emphasize the importance of written records, which by their nature date only from the development of writing. This emphasis has led to the term prehistory, referring to a time before written sources are available. Since writing emerged at different times throughout the world, the distinction between prehistory and history often depends on the topic.

The scope of the human past has naturally led scholars to divide that time into manageable pieces for study. There are a variety of ways in which the past can be divided, including chronologically, culturally, and topically. These three divisions are not mutually exclusive, and significant overlap is often present, as in "The Argentine Labor Movement in an Age of Transition, 1930–1945". It is possible for historians to concern themselves with both very specific and very general locations, times, and topics, although the trend has been toward specialization. For others history has become a "general" term meaning the study of "everything" that is known about the human past, but even this barrier is being challenged by new fields such as big history. Traditionally, history has been studied with some practical or theoretical aim, but now it is also studied simply out of intellectual curiosity.


History and prehistory

Traditionally, the study of history was limited to the written and spoken word. However, the rise of academic professionalism and the creation of new scientific fields in the 19th and 20th centuries brought a flood of new information that challenged this notion. Archaeology, anthropology and other social sciences were providing new information and even theories about human history. Some traditional historians questioned whether these new studies were really history, since they were not limited to the written word. A new term, prehistory, was coined, to encompass the results of these new fields where they yielded information about times before the existence of written records.

In the 20th century, the division between history and prehistory became problematic. Criticism arose because of history's implicit exclusion of certain civilizations, such as those of Sub-Saharan Africa and pre-Columbian America. Additionally, prehistorians such as Vere Gordon Childe and historical archaeologists like James Deetz began using archaeology to explain important events in areas that were traditionally in the field of history. Historians began looking beyond traditional political history narratives with new approaches such as economic, social and cultural history, all of which relied on various sources of evidence. In recent decades, the strict barrier between history and prehistory appears to be breaking down.

There are differing views on when history begins. Some believe history began in the 34th century BC, with cuneiform writing. Cuneiform was written on clay tablets, on which symbols were drawn with a blunt reed called a stylus. The impressions left by the stylus were wedge-shaped, thus giving rise to the name cuneiform ("wedge shaped"). The Sumerian script was adapted for the writing of the Akkadian, Elamite, Hittite (and Luwian), and Hurrian (and Urartian) languages, and it inspired the Old Persian and Ugaritic national alphabets.[2]

Sources that can give light on the past, such as oral tradition, linguistics, and genetics, have become accepted by many mainstream historians. Nevertheless, archaeologists distinguish between history and prehistory based on the appearance of written documents within the region in question. This distinction remains critical for archaeologists because the availability of a written record generates very different interpretative problems and potentials.


Etymology

The term history entered the English language in 1390 with the meaning of "relation of incidents, story" via the Old French histoire, from the Latin historia "narrative, account." This itself was derived from the Ancient Greek ἱστορία, historía, meaning "a learning or knowing by inquiry, history, record, narrative," from the verb ἱστορεῖν, historeîn, "to inquire."

This, in turn, was derived from ἵστωρ, hístōr ("wise man," "witness," or "judge"). Early attestations of ἵστωρ are from the Homeric Hymns, Heraclitus, the Athenian ephebes' oath, and from Boiotic inscriptions (in a legal sense, either "judge" or "witness," or similar). The spirant is problematic, and not present in cognate Greek eídomai ("to appear").

ἵστωρ is ultimately from the Proto-Indo-European *wid-tor-, from the root *weid- ("to know, to see"), also present in the English word wit, the Latin verb videre "to see" (the root of vision and video), the Sanskrit word veda, and the Slavic words videti and vedati, as well as others. (The asterisk before a word indicates that it is a hypothetical construction, not an attested form.) ἱστορία, historía, is an Ionic derivation of the word, which, along with Ionic science and philosophy, spread first through Classical Greece and ultimately over all of the Hellenistic world.

In Middle English, the meaning was "story" in general. The restriction to the meaning "record of past events" in the sense of Herodotus arises in the late 15th century. In German, French, and indeed most languages of the world other than English, this distinction was never made, and the same word is used to mean both "history" and "story". A sense of "systematic account" without a reference to time in particular was current in the 16th century, but is now obsolete. The adjective historical is attested from 1561, and historic from 1669. Historian, in the sense of a "researcher of history" rather than a mere annalist or chronicler who simply records events as they occur, is attested from 1531.

Historiography

The historical method comprises the techniques and guidelines by which historians use primary sources and other evidence to research and then to write history.

The "father of history" has generally been acclaimed as Herodotus of Halicarnassus (484 BC – ca.425 BC).[3] However, it is his contemporary Thucydides (ca. 460 BC – ca. 400 BC) who is credited with having begun the scientific approach to history in his work the History of the Peloponnesian War. Thucydides, unlike Herodutus and other religious historians, regarded history as being the product of the choices and actions of human beings, and looked at cause and effect, rather than as the result of divine intervention.[3] In his historical method, Thucydides emphasized chronology, a neutral point of view, and that the human world was the result of the actions of human beings. Greek historians also viewed history as cyclical, with events regularly reoccurring.

Saint Augustine was influential in Christian and Western thought at the beginning of the Medieval period. Through the Medieval and Renaissance periods, history was often studied from a sacred or religious perspective. Around 1800, the German philosopher and historian Georg Wilhelm Friedrich Hegel brought philosophy and a more secular approach to historical study.

In the preface to his book the Muqaddimah, the historian and early sociologist Ibn Khaldun warned of seven mistakes that he thought historians regularly committed. In this criticism, he approached the past as strange and in need of interpretation. The originality of Ibn Khaldun was to claim that the cultural difference of another age must govern the evaluation of relevant historical material, to distinguish the principles according to which it might be possible to attempt the evaluation, and lastly, to feel the need for experience, in addition to rational principles, in order to assess a culture of the past.

Other historians of note who have advanced the historical methods of study include Leopold von Ranke, Lewis Bernstein Namier, Geoffrey Rudolph Elton, G.M. Trevelyan and A.J.P. Taylor. In the 20th century, historians focused less on epic nationalistic narratives, which often tended to glorify the nation or individuals, and more on realistic chronologies. French historians introduced quantitative history, using broad data to track the lives of typical individuals, and were prominent in the establishment of cultural history (cf. histoire des mentalités). American historians, motivated by the civil rights era, focused on formerly overlooked ethnic, racial, and socio-economic groups. In recent years, postmodernists have challenged the validity of and need for the study of history on the basis that all history is based on the personal interpretation of sources. In his book In Defence of History, Richard J. Evans, a professor of modern history at Cambridge University, defended the worth of history.


Tuesday, March 27, 2007

Painting

Painting taken literally is the practice of applying pigment suspended in a carrier (or medium) and a binding agent (a glue) to a surface (support) such as paper, canvas or a wall. However, when used in an artistic sense it means the use of this activity in combination with drawing, composition and other aesthetic considerations in order to manifest the expressive and conceptual intention of the practitioner. Painting is also used to express spiritual motifs and ideas; sites of this kind of painting range from artwork depicting mythological figures on pottery to The Sistine Chapel to the human body itself.

Colour is the essence of painting, as sound is of music. Colour is highly subjective, but has observable psychological effects, although these can differ from one culture to the next. Black is associated with mourning in the West, but elsewhere white may be. Some painters, theoreticians, writers and scientists, including Goethe, Kandinsky and Newton, have written their own colour theories. Moreover, language offers only a generalisation for any given colour: the word "red", for example, can cover a wide range of variations on the pure red of the spectrum. There is no formalised register of different colours in the way that there is agreement on different notes in music, such as C or C#, although the Pantone system is widely used in the printing and design industries for this purpose.
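
To make the contrast concrete, the short Python sketch below compares a handful of standard web colour names that would all commonly be called "red" with the single frequency that twelve-tone equal temperament assigns to middle C; the hex values are the usual CSS definitions, and the example is purely illustrative.

    # Several distinct colours that English speakers would all call "red"
    # (standard CSS/X11 hex definitions), versus the single pitch that
    # equal-tempered tuning assigns to middle C. Illustrative only.
    reds = {
        "red": "#FF0000",
        "crimson": "#DC143C",
        "firebrick": "#B22222",
        "indianred": "#CD5C5C",
        "darkred": "#8B0000",
    }

    middle_c_hz = 440 * 2 ** (-9 / 12)  # C4 is nine semitones below A4 = 440 Hz

    for name, hex_code in reds.items():
        print(f"{name:10s} -> {hex_code}")
    print(f"middle C   -> {middle_c_hz:.2f} Hz")  # about 261.63 Hz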

Modern artists have extended the practice of painting considerably to include, for example, collage. This began with Cubism and is not painting in the strict sense. Some modern painters incorporate different materials such as sand, cement, straw or wood for their texture. Examples of this are the works of Jean Dubuffet and Anselm Kiefer.

Modern and contemporary art has moved away from the historic value of craft in favour of concept; this has led some to say that painting, as a serious art form, is dead, although this has not deterred the majority of artists from continuing to practise it, either as the whole or as a part of their work.

Architecture

Architecture (from Latin, architectura and ultimately from Greek, αρχιτεκτων, "a master builder", from αρχι- "chief, leader" and τεκτων, "builder, carpenter")[3] is the art and science of designing buildings and structures.

A wider definition would include within its scope the design of the total built environment, from the macrolevel of town planning, urban design, and landscape architecture to the microlevel of creating furniture. Architectural design usually must address both feasibility and cost for the builder, as well as function and aesthetics for the user.

In modern usage, architecture is the art and discipline of creating, or inferring, an actual, implied or apparent plan of any complex object or system. The term can be used to connote the implied architecture of abstract things such as music or mathematics, the apparent architecture of natural things, such as geological formations or the structure of biological cells, or explicitly planned architectures of human-made things such as software, computers, enterprises, and databases, in addition to buildings. In every usage, an architecture may be seen as a subjective mapping from a human perspective (that of the user in the case of abstract or physical artifacts) to the elements or components of some kind of structure or system, which preserves the relationships among the elements or components.

Planned architecture often manipulates space, volume, texture, light, shadow, or abstract elements in order to achieve pleasing aesthetics. This distinguishes it from applied science or engineering, which usually concentrate more on the functional and feasibility aspects of the design of constructions or structures.

In the field of building architecture, the skills demanded of an architect range from the more complex, such as for a hospital or a stadium, to the apparently simpler, such as planning residential houses. Many architectural works may be seen also as cultural and political symbols, and/or works of art. The role of the architect, though changing, has been central to the successful (and sometimes less than successful) design and implementation of pleasingly built environments in which people live.

Sunday, May 14, 2006

liberal arts

The term liberal arts has come to mean studies that are intended to provide general knowledge and intellectual skills, rather than more specialized occupational or professional skills.
The scope of the liberal arts has changed with society. It once emphasised the education of elites in the classics; but, with the rise of science and humanities during the Age of Enlightenment, the scope and meaning of "liberal arts" expanded to include them. Still excluded from the liberal arts are topics that are specific to particular occupations, such as agriculture, business, dentistry, engineering, medicine, pedagogy (school-teaching), and pharmacy.
In the history of education, the seven liberal arts comprised two groups of studies: the trivium and the quadrivium. Studies in the trivium involved grammar, dialectic (logic), and rhetoric; and studies in the quadrivium involved arithmetic, music, geometry, and astronomy. These liberal arts made up the core curriculum of the medieval universities. The term liberal in liberal arts is from the Latin word liberalis, meaning "appropriate for free men" (social and political elites), and they were contrasted with the servile arts. The liberal arts thus initially represented the kinds of skills and general knowledge needed by the elite echelon of society, whereas the servile arts represented specialized tradesman skills and knowledge needed by persons who were employed by the elite.