Design

Posted by motoman 03/06/2009 @ 16:07

Tags : design, fine arts, gadgets and design, leisure, web design, internet, technology

News headlines
Design museum steps out with shoe show - San Jose Mercury News
That's the thinking behind "Stepping Out," a footwear exhibit at the University of California, Davis, Design Museum that explores the language of soles. "Behind every pair of shoes we can have a story—social impact, religion, culture, economy," says...
Design a Character for XBLA Game Raskulls and Win! - Wired News
Applicants are challenged to design their own Raskull, a skeletal titan capable of holding his own in a world inhabited by knights, ninjas and… milk cartons? Entering is simple. Download the official Raskully Profile template (below) and base your work...
Understanding Apple, Part 2 - PC Magazine
The heart of the strategy: Industrial design and making the Mac the center of a digital lifestyle. The company has developed some important services, such as iTunes and MobileMe, that augment the way it delivers the Mac's ease-of-use experience to a...
Nature Conservancy Premieres 'Design For A Living World' - Hamptons.com
By Edward Callaghan New York City - Some 500 guests filled the elegant galleries and the garden patio of the Smithsonian's Cooper-Hewitt, National Design Museum - once the home of industrial magnate Andrew Carnegie - for a special preview of The Nature...
Oldfield student wins pillowcase design contest - Newsday
By Danielle Lambert Christen Ardito, an eighth-grade student at Oldfield Middle School in the Harborfields Central School District, recently had her pillowcase design chosen as a winner in a national design competition....
Solutions to the Draft Waxman Bill Expose Design Flaw in US ETS - Huffington Post
The Waxman-Markey climate change bill has to be finalized by May 25, Memorial Day 2009. The House is considering climate change legislation authored by a key subcommittee chairman, Rep. Ed Markey (D-MA). President Obama has said this is,...
'Camping' As A Defect Of Game Design - Gamasutra
by Lewis Pulsipher on 05/17/09 11:20:00 am I'm always amused when my game design students complain that someone is "camping" in a shooter game. (They play at the game club, not during class!) They're trying to enforce some kind of standard of...
Content: Hoy Chicago Introduces New Design - Portada Magazine
Hoy, the daily Spanish-language publication in Chicago, is launching a new editorial design today that presents news and information in a more engaging, useful and practical way. The redesign offers an enhanced reading flow while incorporating unique...
GPS kids make winning design for veterans campaign - AZ Central.com
It was one of the two winning designs for a fundraising T-shirt that a new national campaign called Home Sweet Home will use to help war veterans avoid foreclosure. On Friday, 7-year-old Sage and sixth-grader Peyton Myers received plaques during an...

Web design

Web page design requires conceptualizing, planning, modeling, and executing electronic media content and its delivery via the Internet using technologies (such as markup languages) suitable for rendering and presentation by web browsers or other web-based graphical user interfaces (GUIs).

The intent of web design is to create a web site (a collection of electronic files residing on one or more web servers) that presents content (including interactive features or interfaces) to the end user in the form of web pages upon request. Elements such as text, forms, and bit-mapped images (GIFs, JPEGs, PNGs) can be placed on the page using HTML, XHTML, or XML tags. Displaying more complex media (vector graphics, animations, videos, sounds) usually requires browsers to incorporate optional plug-ins such as Flash, QuickTime, or the Java run-time environment. Such plug-ins are embedded in web pages using HTML or XHTML tags.
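As a rough sketch of the kind of markup described above (the element names are standard HTML; the file names and form target are hypothetical):

<!-- text placed with heading and paragraph tags -->
<h1>Gallery</h1>
<p>A short caption for the picture below.</p>

<!-- a bit-mapped image -->
<img src="photo.jpg" alt="A sample photograph" />

<!-- a simple form -->
<form action="/contact" method="post">
  <input type="text" name="email" />
  <input type="submit" value="Send" />
</form>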

Improvements in the various browsers' compliance with W3C standards prompted a widespread acceptance of XHTML and XML in conjunction with Cascading Style Sheets (CSS) to position and manipulate web page elements. The latest W3C standards and proposals aim to deliver a wide variety of media and accessibility options to the client without employing plug-ins.
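A minimal illustration of that approach, assuming a hypothetical stylesheet named style.css and made-up element IDs: the XHTML carries only structure, while the CSS positions and styles it.

<!-- in the XHTML head -->
<link rel="stylesheet" type="text/css" href="style.css" />

/* in style.css */
#sidebar {
  position: absolute;   /* taken out of the normal document flow */
  top: 0;
  left: 0;
  width: 200px;
}
#content {
  margin-left: 220px;   /* leaves room for the positioned sidebar */
}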

Typically web pages are classified as static or dynamic.

With growing specialization within communication design and information technology fields, there is a strong tendency to draw a clear line between web design specifically for web pages and web development for the overall logistics of all web-based services.

Tim Berners-Lee published what is considered to be the first website in August 1991. Berners-Lee was the first to combine Internet communication (which had been carrying email and the Usenet for decades) with hypertext (which had also been around for decades, but limited to browsing information stored on a single computer, such as interactive CD-ROM design). Websites are written in a markup language called HTML, and early versions of HTML were very basic, only giving a website's basic structure (headings and paragraphs), and the ability to link using hypertext. This was new and different from existing forms of communication - users could easily navigate to other pages by following hyperlinks from page to page.

As the Web and web design progressed, the markup language became more complex and flexible, gaining the ability to add objects like images and tables to a page. Features like tables, which were originally intended for displaying tabular information, were soon subverted for use as invisible layout devices. With the advent of Cascading Style Sheets (CSS), table-based layout is now commonly regarded as outdated. Database integration technologies such as server-side scripting, together with design standards like those of the W3C, further changed and enhanced the way the Web is made. As times change, websites are changing the code on the inside and the visual design on the outside with ever-evolving programs and utilities.

With the progression of the Web, tens of thousands of web design companies have been established around the world to serve the growing demand for such work. As with much of the information technology industry, many web design companies have been established in technology parks in the developing world, and many Western design firms have set up offices in countries such as India, Romania, and Russia to take advantage of the relatively lower labor rates found there.

A web site is a collection of information about a particular topic or subject. Designing a web site is the arrangement and creation of the web pages that in turn make up the site; each web page carries part of the information for which the web site is developed. A web site might be compared to a book, where each page of the book is a web page.

A web site typically consists of text and images. The first page of a web site is known as the home page or index. Some web sites use what is commonly called a splash page, which might include a welcome message, language or region selection, or a disclaimer. Each web page within a web site is an HTML file with its own URL. Once the web pages are created, they are typically linked together using a navigation menu composed of hyperlinks. Faster browsing speeds have led to shorter attention spans and more demanding online visitors, and this has resulted in less use of splash pages, particularly on commercial web sites.
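Such a navigation menu is typically nothing more than a list of hyperlinks, one per page; the file names below are placeholders:

<ul id="navigation">
  <li><a href="index.html">Home</a></li>
  <li><a href="about.html">About</a></li>
  <li><a href="products.html">Products</a></li>
  <li><a href="contact.html">Contact</a></li>
</ul>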

Once a web site is completed, it must be published or uploaded in order to be viewable to the public over the internet. This may be done using an FTP client. Once published, the web master may use a variety of techniques to increase the traffic, or hits, that the web site receives. This may include submitting the web site to a search engine such as Google or Yahoo, exchanging links with other web sites, creating affiliations with similar web sites, etc.

Web site design crosses multiple disciplines of information systems, information technology and communication design. The web site is an information system whose components are sometimes classified as front-end and back-end. The observable content (e.g. page layout, user interface, graphics, text, audio) is known as the front-end. The back-end comprises the organization and efficiency of the source code, invisible scripted functions, and the server-side components that process the output from the front-end. Depending on the size of a Web development project, it may be carried out by a multi-skilled individual (sometimes called a web master), or a project manager may oversee collaborative design between group members with specialized skills.

As with any collaborative design effort, there are conflicts between the differing goals and methods of web site design. The following are a few of the ongoing ones.

In the early stages of the web, there wasn't as much collaboration between web design and larger advertising campaigns, customer transactions, social networking, intranets and extranets as there is now. Web pages were mainly static online brochures disconnected from the larger projects.

Many web pages are still disconnected from larger projects. Special design considerations are necessary for use within these larger projects. These considerations are often overlooked, especially where there is a lack of leadership, a lack of understanding of why integration matters and of the technical knowledge needed to achieve it, or a lack of concern for the larger project that would facilitate collaboration. This often results in unhealthy competition or compromise between departments, and less than optimal use of web pages.

On the web the designer has no control over several factors, including the size of the browser window, the web browser used, the input devices used (mouse, touch screen, voice command, text, cell phone number pad, etc.) and the size, design, and other characteristics of the fonts users have available (installed) on their own computers.

Some designers choose to control the appearance of the elements on the screen by using specific width designations. This control may be achieved in HTML through the use of (now deprecated) table-based design or the more modern (and standards-based) div-based design, usually enhanced (and made more flexible) with CSS. When the text, images, and layout do not vary among browsers, this is referred to as fixed-width design. Advocates of fixed-width design argue for the designer's precise control over the layout of a site and the placement of objects within pages.
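A sketch of a fixed-width, div-based layout in CSS (the pixel values are arbitrary examples): every block is pinned to an exact width, so the page renders at the same size regardless of the browser window.

#wrapper {
  width: 760px;       /* fixed overall width */
  margin: 0 auto;     /* centered in the window */
}
#content {
  float: left;
  width: 540px;       /* fixed column widths */
}
#sidebar {
  float: right;
  width: 200px;
}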

Other designers choose a more liquid approach, one which arranges content flexibly on users' screens, responding to the size of their browsers' windows. For better or worse, they concede to users more control over the rendition of their work. Proponents of liquid design prefer greater compatibility with users' various choices of presentation and more efficient use of the screen space available. Liquid design can be achieved by setting the width of text blocks and page modules to a percentage of the page, or by avoiding specifying the width for these elements altogether, allowing them to expand or contract naturally in accordance with the width of the browser. It is more in keeping with the original concept of HTML, that it should specify, not the appearance of text, but its contextual function, leaving the rendition to be decided by users' various display devices.
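A liquid version of the same layout, again with illustrative numbers only, expresses widths as percentages so the columns stretch and shrink with the browser window; min-width and max-width (discussed in the next paragraph) can keep line lengths within sensible bounds.

#wrapper {
  width: 90%;          /* scales with the browser window */
  min-width: 600px;    /* but never narrower than this */
  max-width: 1200px;   /* nor wider than this */
  margin: 0 auto;
}
#content { float: left;  width: 70%; }
#sidebar { float: right; width: 25%; }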

Web page designers (of both types) must consider how their pages will appear on various screen resolutions. Sometimes the most pragmatic choice is to allow text width to vary between minimum and maximum values. This allows designers to avoid considering rare users' equipment while still taking good advantage of available screen space.

Similar to liquid layout is the optional fit to window feature with Adobe Flash content. This is a fixed layout that optimally scales the content of the page without changing the arrangement or text wrapping when the browser is resized.

Adobe Flash (formerly Macromedia Flash) is a proprietary, robust graphics animation or application development program used to create and deliver dynamic content, media (such as sound and video), and interactive applications over the web via the browser.

Many graphic artists use Flash because it gives them exact control over every part of the design, and anything can be animated and generally "jazzed up". Some application designers enjoy Flash because it lets them create applications that do not have to be refreshed or go to a new web page every time an action occurs. Flash can use embedded fonts instead of the standard fonts installed on most computers. There are many sites which forgo HTML entirely for Flash. Other sites may use Flash content combined with HTML as conservatively as GIFs or JPEGs would be used, but with smaller vector file sizes and the option of faster-loading animations. Flash may also be used to protect content from unauthorized duplication or searching. Alternatively, small, dynamic Flash objects may be used to replace standard HTML elements (such as headers or menu links) with advanced typography not possible via regular HTML or CSS (see Scalable Inman Flash Replacement).

Flash is not a standard produced by a vendor-neutral standards organization, unlike most of the core protocols and formats on the Internet. Flash is much more self-contained than the open HTML format, and it does not integrate with web browser UI features. For example, the browser's "Back" button cannot be used to go to a previous screen within the same Flash file, only to a previous HTML page containing a different Flash file. The browser's "Reload" button will not reset just a portion of a Flash file; instead, it restarts the entire Flash file as it was loaded when the HTML page was entered, much like an online video. Such features must instead be built into the interface of the Flash file itself if needed.

Flash content requires a proprietary media-playing plug-in in order to be viewed. According to a study, 98% of US Web users have the Flash Player installed. The percentage has remained fairly constant over the years; for example, a study conducted by NPD Research in 2002 showed that 97.8% of US Web users had the Flash player installed. Numbers vary depending on the detection scheme and research demographics.

Flash detractors claim that Flash websites tend to be poorly designed and often use confusing and non-standard user interfaces, such as failing to scale with the size of the web browser window or being incompatible with common browser features such as the back button. Until recently, search engines were unable to index Flash objects, which prevented such sites from having their contents easily found, because many search engine crawlers rely on text to index websites. It is possible to specify alternate content to be displayed for browsers that do not support Flash; using alternate content helps search engines understand the page and can result in much better visibility for it. However, the vast majority of Flash websites are not accessible to users with disabilities (for screen readers, for example) or Section 508 compliant. An additional issue is that sites which serve different content to search engines than to their human visitors are usually judged to be spamming search engines and may be automatically banned.
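One common way to provide such alternate content is to nest ordinary HTML inside the object element that embeds the Flash movie; browsers and crawlers that cannot run the plug-in fall back to the nested markup. This is only a sketch of the pattern, and the file names are hypothetical:

<object type="application/x-shockwave-flash" data="intro.swf"
        width="600" height="400">
  <param name="movie" value="intro.swf" />
  <!-- shown when the Flash plug-in is unavailable -->
  <div>
    <h1>Company name</h1>
    <p>A plain-HTML summary of the animated introduction,
       carrying the same text and links as the Flash version.</p>
    <a href="products.html">Browse our products</a>
  </div>
</object>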

The most recent incarnation of Flash's scripting language (called "ActionScript", which is an ECMA language similar to JavaScript) incorporates long-awaited usability features, such as respecting the browser's font size and allowing blind users to use screen readers. ActionScript 2.0 is an object-oriented language, allowing the use of CSS, XML, and the design of class-based web applications.

When Netscape Navigator 4 dominated the browser market, the popular solution for laying out a Web page was tables. Often even simple designs for a page would require dozens of tables nested inside each other. Many web templates in Dreamweaver and other WYSIWYG editors still use this technique today. Navigator 4 didn't support CSS to a useful degree, so it simply wasn't used.

After the browser wars subsided, and the dominant browsers such as Internet Explorer became more W3C compliant, designers started turning toward CSS as an alternate means of laying out their pages. CSS proponents say that tables should be used only for tabular data, not for layout. Using CSS instead of tables also returns HTML to a semantic markup, which helps bots and search engines understand what's going on in a web page. All modern Web browsers support CSS with different degrees of limitations.

However, one of the main arguments against CSS is that by relying on it exclusively, control is essentially relinquished, as each browser has its own quirks which result in a slightly different page display. This is especially a problem because not every browser supports the same subset of CSS rules. For designers who are used to table-based layouts, developing Web sites in CSS often becomes a matter of trying to replicate what can be done with tables, leading some to find CSS design rather cumbersome due to lack of familiarity. For example, at one time it was rather difficult to produce certain design elements, such as vertical positioning and full-length footers, in a design using absolute positioning. With the abundance of CSS resources available online today, though, designing with reasonable adherence to standards involves little more than applying CSS 2.1 or CSS 3 to properly structured markup.

Most modern browsers have now resolved most of these quirks in CSS rendering, and this has made many different CSS layouts possible. However, some people continue to use old browsers, and designers need to keep this in mind and allow for graceful degradation of pages in older browsers. Most notable among these old browsers are Internet Explorer 5 and 5.5, which, according to some web designers, are becoming the new Netscape Navigator 4, a block that holds the World Wide Web back from converting to CSS design. However, the World Wide Web Consortium (W3C) has made CSS in combination with XHTML the standard for web design.

Some web developers have a graphic arts background and may pay more attention to how a page looks than considering other issues such as how visitors are going to find the page via a search engine. Some might rely more on advertising than search engines to attract visitors to the site. On the other side of the issue, search engine optimization consultants (SEOs) are concerned with how well a web site works technically and textually: how much traffic it generates via search engines, and how many sales it makes, assuming looks don't contribute to the sales. As a result, the designers and SEOs often end up in disputes where the designer wants more 'pretty' graphics, and the SEO wants lots of 'ugly' keyword-rich text, bullet lists, and text links. One could argue that this is a false dichotomy due to the possibility that a web design may integrate the two disciplines for a collaborative and synergistic solution. Because some graphics serve communication purposes in addition to aesthetics, how well a site works may depend on the graphic designer's visual communication ideas as well as the SEO considerations.

Another problem with using a lot of graphics on a page is that download times can be greatly lengthened, often irritating the user. This has become less of a problem as high-speed internet connections and vector graphics have become more common. It remains an engineering challenge to increase bandwidth as well as an artistic challenge to minimize graphics and graphic file sizes, and an ongoing one, since increased bandwidth invites increased amounts of content.

The W3C accessibility guidelines discourage the use of tables for layout, but permit an exception where a layout table still makes sense when linearized or where an alternate (perhaps linearized) version is made available.

Website accessibility is also changing as it is impacted by content management systems that allow changes to be made to webpages without requiring knowledge of a programming language.

Before creating and uploading a website, it is important to take the time to plan exactly what is needed in the website. Thoroughly considering the audience or target market, as well as defining the purpose and deciding what content will be developed are extremely important.

It is essential to define the purpose of the website as one of the first steps in the planning process. A purpose statement should show focus based on what the website will accomplish and what the users will get from it. A clearly defined purpose will help the rest of the planning process as the audience is identified and the content of the site is developed. Setting short and long term goals for the website will help make the purpose clear and plan for the future when expansion, modification, and improvement will take place. Goal-setting practices and measurable objectives should be identified to track the progress of the site and determine success.

Taking into account the characteristics of the audience will allow an effective website to be created that will deliver the desired content to the target audience.

Content evaluation and organization requires that the purpose of the website be clearly defined. Collecting a list of the necessary content then organizing it according to the audience's needs is a key step in website planning. In the process of gathering the content being offered, any items that do not support the defined purpose or accomplish target audience objectives should be removed. It is a good idea to test the content and purpose on a focus group and compare the offerings to the audience needs. The next step is to organize the basic information structure by categorizing the content and organizing it according to user needs. Each category should be named with a concise and descriptive title that will become a link on the website. Planning for the site's content ensures that the wants or needs of the target audience and the purpose of the site will be fulfilled.

Because of the market share of the browsers in use (which depends on the target market), the compatibility of a website with its viewers is restricted. For instance, a website designed for the majority of websurfers will be limited to valid XHTML 1.0 Strict or older, Cascading Style Sheets level 1, and a 1024x768 display resolution. This is because Internet Explorer is not fully W3C standards compliant with the modularity of XHTML 1.1 and the majority of CSS beyond level 1. A target market with more users of alternative browsers (e.g. Firefox, Safari and Opera) allows for more W3C compliance and thus a greater range of options for a web designer.
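Targeting that broad market in practice usually means opening every page with the XHTML 1.0 Strict document type declaration and restricting stylesheets to CSS level 1 features; the page body here is just a placeholder:

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
  <head><title>Example page</title></head>
  <body><p>Page content goes here.</p></body>
</html>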

Another restriction on webpage design is the range of image file formats in use. The majority of users' browsers support GIF, JPEG, and PNG (with restrictions). Again, Internet Explorer is the major constraint here: it does not fully support PNG's advanced transparency features, with the result that the GIF format is still the most widely used graphic file format for transparent images.

Many website incompatibilities go unnoticed by the designer and unreported by the users. The only way to be certain a website will work on a particular platform is to test it on that platform.

Documentation is used to visually plan the site while taking into account the purpose, audience and content, to design the site structure, content and interactions that are most suitable for the website. Documentation may be considered a prototype for the website – a model which allows the website layout to be reviewed, resulting in suggested changes, improvements and/or enhancements. This review process increases the likelihood of success of the website.

In addition to planning the structure, the layout and interface of individual pages may be planned using a storyboard. In the process of storyboarding, a record is made of the description, purpose and title of each page in the site, and they are linked together according to the most effective and logical diagram type. Depending on the number of pages required for the website, documentation methods may include using pieces of paper and drawing lines to connect them, or creating the storyboard using computer software.

Some or all of the individual pages may be designed in greater detail as a website wireframe, a mock-up model or comprehensive layout of what the page will actually look like. This is often done in a graphic or layout design program. The wireframe has no working functionality; it is a planning tool, though it can also be used for selling ideas to other web design companies.




Graphic design

A Boeing 747 Air Force One aircraft: the cyan blue form, the US flag, the presidential seal and the lettering were all designed at different times and combined in this one final design. Graphic design is applied in virtually every organization or society, and there are virtually no limits to its size and applications.

The term graphic design can refer to a number of artistic and professional disciplines which focus on visual communication and presentation. Various methods are used to create and combine symbols, images and/or words to create a visual representation of ideas and messages. A graphic designer may use typography, visual arts and page layout techniques to produce the final result. Graphic design often refers to both the process (designing) by which the communication is created and the products (designs) which are generated.

Common uses of graphic design include magazines, advertisements, product packaging and web design. For example, a product package might include a logo or other artwork, organized text and pure design elements such as shapes and color which unify the piece. Composition is one of the most important features of graphic design especially when using pre-existing materials or diverse elements.

During the Tang dynasty (618–906), wood blocks were cut to print on textiles and later to reproduce Buddhist texts. A Buddhist scripture printed in 868 is the earliest known printed book. Beginning in the 11th century, longer scrolls and books were produced using movable type printing, making books widely available during the Song dynasty (960–1279). Sometime around 1450, Johannes Gutenberg's printing press made books widely available in Europe. The book design of Aldus Manutius developed the book structure which would become the foundation of western publication design. This era of graphic design is called Humanist or Old Style.

In late 19th-century Europe, especially in the United Kingdom, a movement began to separate graphic design from fine art. Piet Mondrian has been called a father of graphic design: he was a fine artist, but his use of grids inspired the modern grid system used today in advertising, print and web layout.

In 1849, Henry Cole became one of the major forces in design education in Great Britain, informing the government of the importance of design in his Journal of Design and Manufactures. He organized the Great Exhibition as a celebration of modern industrial technology and Victorian design.

From 1892 to 1896 William Morris' Kelmscott Press published books that are some of the most significant of the graphic design products of the Arts and Crafts movement, and made a very lucrative business of creating books of great stylistic refinement and selling them to the wealthy for a premium. Morris proved that a market existed for works of graphic design in their own right and helped pioneer the separation of design from production and from fine art. The work of the Kelmscott Press is characterized by its obsession with historical styles. This historicism was, however, important as it amounted to the first significant reaction to the stale state of nineteenth-century graphic design. Morris' work, along with the rest of the Private Press movement, directly influenced Art Nouveau and is indirectly responsible for developments in early twentieth century graphic design in general.

The signage in the London Underground is a classic of the modern era and used a font designed by Edward Johnston in 1916.

In the 1920s, Soviet constructivism applied 'intellectual production' in different spheres of production. The movement saw individualistic art as useless in revolutionary Russia and thus moved towards creating objects for utilitarian purposes. They designed buildings, theater sets, posters, fabrics, clothing, furniture, logos, menus, etc.

Jan Tschichold codified the principles of modern typography in his 1928 book, New Typography. He later repudiated the philosophy he espoused in this book as fascistic, but it remained very influential. Tschichold, Bauhaus typographers such as Herbert Bayer and Laszlo Moholy-Nagy, and El Lissitzky are the fathers of graphic design as we know it today. They pioneered production techniques and stylistic devices used throughout the twentieth century. The following years saw graphic design in the modern style gain widespread acceptance and application. A booming post-World War II American economy established a greater need for graphic design, mainly in advertising and packaging. The emigration of the German Bauhaus school of design to Chicago in 1937 brought a "mass-produced" minimalism to America, sparking a wildfire of "modern" architecture and design. Notable names in mid-century modern design include Adrian Frutiger, designer of the typefaces Univers and Frutiger; Paul Rand, who, from the late 1930s until his death in 1996, took the principles of the Bauhaus and applied them to popular advertising and logo design, helping to create a uniquely American approach to European minimalism while becoming one of the principal pioneers of the subset of graphic design known as corporate identity; and Josef Müller-Brockmann, who designed posters in a severe yet accessible manner typical of the 1950s and 1960s.

From road signs to technical schematics, from interoffice memorandums to reference manuals, graphic design enhances transfer of knowledge. Readability is enhanced by improving the visual presentation of text.

Design can also aid in selling a product or idea through effective visual communication. It is applied to products and to elements of company identity such as logos, colors, and text; together these are defined as branding (see also advertising). Branding has increasingly become important in the range of services offered by many graphic designers, alongside corporate identity, and the terms are often used interchangeably.

Textbooks are designed to present subjects such as geography, science, and math. These publications have layouts which illustrate theories and diagrams. A common example of graphics in use to educate is diagrams of human anatomy. Graphic design is also applied to layout and formatting of educational material to make the information more accessible and more readily understandable.

Graphic design is applied in the entertainment industry in decoration, scenery, and visual storytelling. Other examples of design for entertainment purposes include novels, comic books, opening and closing credits in film, and programs and props on stage. This could also include artwork used for T-shirts and other items screenprinted for sale.

From scientific journals to news reporting, the presentation of opinion and facts is often improved with graphics and thoughtful compositions of visual information - known as information design. Newspapers, magazines, blogs, television and film documentaries may use graphic design to inform and entertain. With the advent of the web, information designers with experience in interactive tools such as Adobe Flash are increasingly being used to illustrate the background to news stories.

A graphic design project may involve the stylization and presentation of existing text and either preexisting imagery or images developed by the graphic designer. For example, a newspaper story begins with the journalists and photojournalists; it then becomes the graphic designer's job to organize the page into a reasonable layout and determine whether any other graphic elements are required. In a magazine article or advertisement, the graphic designer or art director will often commission photographers or illustrators to create original pieces to be incorporated into the design layout. Contemporary design practice has been extended to the modern computer, for example in the use of WYSIWYG user interfaces, often referred to as interactive design or multimedia design.

Before any graphic elements may be applied to a design, the graphic elements must be originated by means of visual art skills. These graphics are often (but not always) developed by a graphic designer. Visual arts include works which are primarily visual in nature using anything from traditional media, to photography or computer generated art. Graphic design principles may be applied to each graphic art element individually as well as to the final composition.

Typography is the art, craft and techniques of type design, modifying type glyphs, and arranging type. Type glyphs (characters) are created and modified using a variety of illustration techniques. The arrangement of type is the selection of typefaces, point size, line length, leading (line spacing) and letter spacing.

Typography is performed by typesetters, compositors, typographers, graphic artists, art directors, and clerical workers. Until the Digital Age, typography was a specialized occupation. Digitization opened up typography to new generations of visual designers and lay users.

Page layout is the part of graphic design that deals in the arrangement and style treatment of elements (content) on a page. Beginning from early illuminated pages in hand-copied books of the Middle Ages and proceeding down to intricate modern magazine and catalog layouts, proper page design has long been a consideration in printed material. With print media, elements usually consist of type (text), images (pictures), and occasionally place-holder graphics for elements that are not printed with ink such as die/laser cutting, foil stamping or blind embossing.

Graphic designers are often involved in interface design, such as web design and software design when end user interactivity is a design consideration of the layout or interface. Combining visual communication skills with the interactive communication skills of user interaction and online branding, graphic designers often work with software developers and web developers to create both the look and feel of a web site or software application and enhance the interactive experience of the user or web site visitor.

Printmaking is the process of making artworks by printing, normally on paper. Except in the case of monotyping, the process is capable of producing multiples of the same piece, which is called a print. Each piece is not a copy but an original, since it is not a reproduction of another work of art; it is technically known as an impression. Painting and drawing, on the other hand, create a unique original piece of artwork. Prints are created from a single original surface, known technically as a matrix. Common types of matrices include plates of metal, usually copper or zinc, for engraving or etching; stone, used for lithography; blocks of wood for woodcuts; linoleum for linocuts; and fabric plates for screen-printing, among many other kinds. Works printed from a single plate create an edition, in modern times usually each signed and numbered to form a limited edition. Prints may also be published in book form, as artist's books. A single print could be the product of one or multiple techniques.

Chromatics is the study of how the eye perceives color and of how colors can be described and organized for the printer and the monitor. The retina is covered by two types of light-sensitive receptors, named rods and cones. Rods are sensitive to light but not to color; cones are the opposite, being less sensitive to light but able to perceive color.

Critical, observational, quantitative and analytic thinking are required for design layouts and rendering. If the executor is merely following a sketch, script or instructions (as may be supplied by an art director) they are not usually considered the author. The layout is produced using external traditional or digital image editing tools. Selecting the appropriate tools for each project is critical in how the project will be perceived by its audience.

In the mid-1980s, the arrival of desktop publishing and graphic art software applications introduced a generation of designers to computer image manipulation and creation that had previously been manually executed. Computer graphic design enabled designers to instantly see the effects of layout or typographic changes, and to simulate the effects of traditional media without requiring a lot of space. However, traditional tools such as pencils or markers are often used to develop ideas even when computers are used for finalization. A designer or art director may well hand-sketch numerous concepts as part of the creative process, and some of these sketches may even be shown to a client for early-stage approval before the idea is developed further using a computer and graphic design software.

Computers are generally considered an indispensable tool in the graphic design industry. Computers and software applications are generally seen, by creative professionals, as more effective production tools than traditional methods. However, some designers, such as Milton Glaser, continue to use manual and traditional tools for production.

New ideas can come by way of experimenting with tools and methods. Some designers explore ideas using pencil and paper to avoid creating within the limits of whatever computer fonts, clipart, stock photos, or rendering filters (e.g. Kai's Power Tools) are available on any particular configuration. Others use many different mark-making tools and resources from computers to sticks and mud as a means of inspiring creativity. One of the key features of graphic design is that it makes a tool out of appropriate image selection in order to convey meaning.

There is some debate whether computers enhance the creative process of graphic design. Rapid production from the computer allows many designers to explore multiple ideas quickly with more detail than what could be achieved by traditional hand-rendering or paste-up on paper, moving the designer through the creative process more quickly. However, being faced with limitless choices does not help isolate the best design solution and can lead to designers endlessly iterating without a clear design outcome.

A graphic designer may use sketches to explore multiple or complex ideas quickly without the potential distractions of technical difficulties from software malfunctions or learning the software. Hand rendered comps are often used to get approval of an idea execution before investing time to produce finished visuals on a computer or in paste-up. The same thumbnail sketches or rough drafts on paper may be used to rapidly refine and produce the idea on the computer in a hybrid process. This hybrid process is especially useful in logo design where a software learning curve may detract from a creative thought process. The traditional-design/computer-production hybrid process may be used for freeing one's creativity in page layout or image development as well. Traditional graphic designers may employ computer-savvy production artists to produce their ideas from sketches, without needing to learn the computer skills themselves. However, this practice is less utilized since the advent of desktop publishing and its integration with most graphic design courses.

Graphic design career paths cover all ends of the creative spectrum and often overlap. They can also vary depending on the industry focus of a particular design organisation. The main responsibilities (not necessarily titles) include graphic designer, art director, creative director, and the entry level production artist. Depending on the industry served, the responsibilities may have different titles such as "DTP Associate" or "Graphic Artist," but the graphic design principles usually remain consistent. The responsibilities may come from or lead to specialized skills such as illustration, photography or interactive design. A graphic designer reports to the art director, creative director, senior media creative or chief creative director. As a designer becomes more senior, they may spend less time on hands-on layout and more time dealing with larger focus creative activity, such as brand development and corporate identity development. Moreover, as graphic designers become more senior, they are often expected to interact more directly with clients.




Nuclear weapon design

The first nuclear weapons, though large, cumbersome and inefficient, provided the basic design building blocks of all future weapons. Here the Gadget device is prepared for the first nuclear test: Trinity.

Nuclear weapon designs are physical, chemical, and engineering arrangements that cause the physics package of a nuclear weapon to detonate. There are three basic design types. In all three, the explosive energy is derived primarily from nuclear fission, not fusion.

Pure fission weapons historically have been the first type to be built by a nation state. Large industrial states with well-developed nuclear arsenals have two-stage thermonuclear weapons, which are the most compact, scalable, and cost effective option once the necessary industrial infrastructure is built.

All innovations in nuclear weapon design originated in the United States, although some were later developed independently by other states; the following descriptions feature U.S. designs.

In early news accounts, pure fission weapons were called atomic bombs or A-bombs, a misnomer since the energy comes only from the nucleus of the atom. Weapons involving fusion were called hydrogen bombs or H-bombs, also a misnomer since their destructive energy comes mostly from fission. Insiders favored the terms nuclear and thermonuclear, respectively.

The term thermonuclear refers to the high temperatures required to initiate fusion. It ignores the equally important factor of pressure, which was considered secret at the time the term became current. Many nuclear weapon terms are similarly inaccurate because of their origin in a classified environment. Some are nonsense code words such as "alarm clock" (see below).

Nuclear fission splits the heaviest of atoms to form lighter atoms. Nuclear fusion bonds together the lightest atoms to form heavier atoms. Both reactions generate roughly a million times more energy than comparable chemical reactions, making nuclear bombs a million times more powerful than non-nuclear bombs, a claim made in a French patent in May 1939.

In some ways, fission and fusion are opposite and complementary reactions, but the particulars are unique for each. To understand how nuclear weapons are designed, it is useful to know the important similarities and differences between fission and fusion. The following explanation uses rounded numbers and approximations.

When a free neutron hits the nucleus of a fissionable atom like uranium-235 ( 235U), the uranium splits into two smaller atoms called fission fragments, plus more neutrons. Fission can be self-sustaining because it produces more neutrons of the speed required to cause new fissions.

The immediate energy release per atom is 180 million electron volts (MeV), i.e. 74 TJ/kg, of which 90% is kinetic energy (or motion) of the fission fragments, flying away from each other mutually repelled by the positive charge of their protons (38 for strontium, 54 for xenon). Thus their initial kinetic energy is 67 TJ/kg, hence their initial speed is 12,000 kilometers per second, but their high electric charge causes many inelastic collisions with nearby nuclei. The fragments remain trapped inside the bomb's uranium pit until their motion is converted into x-ray heat, a process which takes about a millionth of a second (a microsecond).
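These figures can be checked with rounded back-of-the-envelope arithmetic. Energy per kilogram of U-235:
\[
\frac{180\ \text{MeV} \times 1.6\times10^{-13}\ \text{J/MeV} \times 6.0\times10^{23}\ \text{atoms/mol}}{0.235\ \text{kg/mol}} \approx 7.4\times10^{13}\ \text{J/kg} = 74\ \text{TJ/kg}.
\]
And the initial fragment speed implied by the 90% kinetic-energy share:
\[
v \approx \sqrt{2 \times 0.9 \times 7.4\times10^{13}\ \text{J/kg}} \approx 1.2\times10^{7}\ \text{m/s} \approx 12{,}000\ \text{km/s}.
\]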

This x-ray energy produces the blast and fire which are normally the purpose of a nuclear explosion.

After the fission products slow down, they remain radioactive. Being new elements with too many neutrons, they eventually become stable by means of beta decay, converting neutrons into protons by throwing off electrons and gamma rays. Each fission product nucleus decays between one and six times, average three times, producing radioactive elements with half-lives up to 200,000 years. In reactors, these products are the nuclear waste in spent fuel. In bombs, they become radioactive fallout, both local and global.

Meanwhile, inside the exploding bomb, the free neutrons released by fission strike nearby U-235 nuclei causing them to fission in an exponentially growing chain reaction (1, 2, 4, 8, 16, etc.). Starting from one, the number of fissions can theoretically double a hundred times in a microsecond, which could consume all uranium up to hundreds of tons by the hundredth link in the chain. In practice, bombs do not contain that much uranium, and, anyway, just a few kilograms undergo fission before the uranium blows itself apart.
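The "hundreds of tons" figure follows from the doubling arithmetic (rounded values):
\[
2^{100} \approx 1.3\times10^{30}\ \text{fissions}, \qquad
1.3\times10^{30} \times \frac{0.235\ \text{kg/mol}}{6.0\times10^{23}\ \text{mol}^{-1}} \approx 5\times10^{5}\ \text{kg} \approx 500\ \text{metric tons of U-235}.
\]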

Holding an exploding bomb together is the greatest challenge of fission weapon design. The heat of fission rapidly expands the uranium pit, spreading apart the target nuclei and making space for the neutrons to escape without being captured. The chain reaction stops.

Materials which can sustain a chain reaction are called fissile. The two fissile materials used in nuclear weapons are: U-235, also known as highly enriched uranium (HEU), oralloy (Oy) meaning Oak Ridge Alloy, or 25 (the last digits of the atomic number, which is 92 for uranium, and the atomic weight, here 235, respectively); and Pu-239, also known as plutonium, or 49 (from 94 and 239).

Uranium's most common isotope, U-238, is fissionable but not fissile (meaning that it cannot sustain a chain reaction by itself but can be made to fission, specifically by neutrons from a fusion reaction). Its aliases include natural or unenriched uranium, depleted uranium (DU), tubealloy (Tu), and 28. It cannot sustain a chain reaction, because its own fission neutrons are not powerful enough to cause more U-238 fission. However, the neutrons released by fusion will fission U-238. This reaction produces most of the energy in a typical two-stage thermonuclear weapon.

In the deuterium-tritium fusion reaction, the total energy output, 17.6 MeV, is one tenth of that of fission, but the ingredients are only one-fiftieth as massive, so the energy output per kilogram is greater. However, in this fusion reaction 80% of the energy, or 14 MeV, is in the motion of the neutron which, having no electric charge and being almost as massive as the hydrogen nuclei that created it, can escape the scene without leaving its energy behind to help sustain the reaction – or to generate x-rays for blast and fire.
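Expressed as arithmetic, comparing the deuterium-tritium reaction (about 5 atomic mass units of fuel) with fission of a uranium-235 nucleus (about 235 mass units):
\[
\frac{17.6\ \text{MeV}}{5\ \text{u}} \approx 3.5\ \text{MeV/u}
\qquad\text{versus}\qquad
\frac{180\ \text{MeV}}{235\ \text{u}} \approx 0.77\ \text{MeV/u},
\]
so fusion releases roughly four to five times more energy per unit mass of reacting material, even though each individual reaction yields only about a tenth as much.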

The only practical way to capture most of the fusion energy is to trap the neutrons inside a massive bottle of heavy material such as lead, uranium, or plutonium. If the 14 MeV neutron is captured by uranium (either type: 235 or 238) or plutonium, the result is fission and the release of 180 MeV of fission energy, which will produce the heat and pressure necessary to sustain fusion, in addition to multiplying the energy output tenfold.

Fission is thus necessary to start fusion, to sustain fusion, and to optimize the extraction of useful energy from fusion (by causing more fission). In the case of a neutron bomb (see below), the last point does not apply, since the escape of neutrons is the objective.

A nuclear reactor is necessary to provide the neutrons. The industrial-scale conversion of lithium-6 to tritium is very similar to the conversion of uranium-238 into plutonium-239. In both cases the feed material is placed inside a nuclear reactor and removed for processing after a period of time. In the 1950s, when reactor capacity was limited, producing an atom of tritium meant forgoing the production of an atom of plutonium.

The fission of one plutonium atom releases ten times more total energy than the fusion of one tritium atom, and it generates fifty times more blast and fire. For this reason, tritium is included in nuclear weapon components only when it causes more fission than its production sacrifices, namely in the case of fusion-boosted fission.

However, an exploding nuclear bomb is itself a nuclear reactor. The conversion of lithium-6 to tritium can take place simultaneously throughout the secondary of a two-stage thermonuclear weapon, producing tritium in place as the device explodes.

Of the three basic types of nuclear weapon, the first, pure fission, uses the first of the three nuclear reactions above. The second, fusion-boosted fission, uses the first two. The third, two-stage thermonuclear, uses all three.

The first task of a nuclear weapon design is to rapidly assemble, at the time of detonation, more than one critical mass of fissile uranium or plutonium. A critical mass is one in which the percentage of fission-produced neutrons which are captured and cause more fission is large enough to perpetuate the fission and prevent it from dying out.

Once the critical mass is assembled, at maximum density, a burst of neutrons is supplied to start as many chain reactions as possible. Early weapons used an "urchin" inside the pit containing non-touching interior surfaces of polonium-210 and beryllium. Implosion of the pit crushed the urchin, bringing the two metals in contact to produce free neutrons. In modern weapons, the neutron generator is a high-voltage vacuum tube containing a particle accelerator which bombards a deuterium/tritium-metal hydride target with deuterium and tritium ions. The resulting small-scale fusion produces neutrons at a protected location outside the physics package, from which they penetrate the pit. This method allows better control of the timing of chain reaction initiation.

The critical mass of an uncompressed sphere of bare metal is 110 lb (50 kg) for uranium-235 and 35 lb (16 kg) for delta-phase plutonium-239. In practical applications, the amount of material required for critical mass is modified by shape, purity, density, and the proximity to neutron-reflecting material, all of which affect the escape or capture of neutrons.

To avoid a chain reaction during handling, the fissile material in the weapon must be sub-critical before detonation. It may consist of one or more components containing less than one uncompressed critical mass each. A thin hollow shell can have more than the bare-sphere critical mass, as can a cylinder, which can be arbitrarily long without ever reaching critical mass.

A tamper is an optional layer of dense material surrounding the fissile material. Due to its inertia it delays the expansion of the reacting material, increasing the efficiency of the weapon. Often the same layer serves both as tamper and as neutron reflector.

Little Boy, the Hiroshima bomb, used 141 lb (64 kg) of uranium with an average enrichment of around 80%, or 112 lb (51 kg) of U-235, just about the bare-metal critical mass. (See Little Boy article for a detailed drawing.) When assembled inside its tamper/reflector of tungsten carbide, the 141 lb (64 kg) was more than twice critical mass. Before the detonation, the uranium-235 was formed into two sub-critical pieces, one of which was later fired down a gun barrel to join the other, starting the atomic explosion. About 1% of the uranium underwent fission; the remainder, representing most of the entire wartime output of the giant factories at Oak Ridge, scattered uselessly.

The inefficiency was caused by the speed with which the uncompressed fissioning uranium expanded and became sub-critical by virtue of decreased density. Despite its inefficiency, this design, because of its shape, was adapted for use in small-diameter, cylindrical artillery shells (a gun-type warhead fired from the barrel of a much larger gun). Such warheads were deployed by the U.S. until 1992, accounting for a significant fraction of the U-235 in the arsenal, and were some of the first weapons dismantled to comply with treaties limiting warhead numbers. The rationale for this decision was undoubtedly a combination of the lower yield and the grave safety issues associated with the gun-type design.

Fat Man, the Nagasaki bomb, used 13.6 lb (6.2 kg, about 12 fluid ounces in volume) of Pu-239, which is only 39% of bare-metal critical mass. (See Fat Man article for a detailed drawing.) The U-238-reflected, 13.6 lb (6.2 kg) pit was sub-critical before detonation. During detonation, criticality was achieved by implosion: the plutonium pit was squeezed to increase its density by the simultaneous detonation of conventional explosives placed uniformly around the pit. The explosives were detonated by multiple exploding-bridgewire detonators. It is estimated that only about 20% of the plutonium underwent fission; the rest, about 11 lb (5 kg), was scattered.

An implosion shock wave might be of such short duration that only a fraction of the pit is compressed at any instant as the wave passes through it. A pusher shell made of a low-density metal such as aluminium, beryllium, or an alloy of the two (aluminium being easier and safer to shape, beryllium valued for its high neutron-reflective capability) may be needed. The pusher is located between the explosive lens and the tamper. It works by reflecting some of the shock wave backwards, thereby lengthening its duration. Fat Man used an aluminium pusher.

The key to Fat Man's greater efficiency was the inward momentum of the massive U-238 tamper (which did not undergo fission). Once the chain reaction started in the plutonium, the momentum of the implosion had to be reversed before expansion could stop the fission. By holding everything together for a few hundred nanoseconds more, the efficiency was increased.

The core of an implosion weapon – the fissile material and any reflector or tamper bonded to it – is known as the pit. Some weapons tested during the 1950s used pits made with U-235 alone, or in composite with plutonium, but all-plutonium pits are the smallest in diameter and have been the standard since the early 1960s.

Casting and then machining plutonium is difficult not only because of its toxicity, but also because plutonium has many different metallic phases, also known as allotropes. As plutonium cools, changes in phase result in distortion. This distortion is normally overcome by alloying it with 3–3.5 molar% (0.9–1.0% by weight) gallium which causes it to take up its delta phase over a wide temperature range. When cooling from molten it then suffers only a single phase change, from epsilon to delta, instead of the four changes it would otherwise pass through. Other trivalent metals would also work, but gallium has a small neutron absorption cross section and helps protect the plutonium against corrosion. A drawback is that gallium compounds themselves are corrosive and so if the plutonium is recovered from dismantled weapons for conversion to plutonium dioxide for power reactors, there is the difficulty of removing the gallium.

Because plutonium is chemically reactive and toxic if it enters the body by inhalation or any other means, for protection of the assembler, it is common to plate the completed pit with a thin layer of inert metal. In the first weapons, nickel was used but gold is now preferred.

The first improvement on the Fat Man design was to put an air space between the tamper and the pit to create a hammer-on-nail impact. The pit, supported on a hollow cone inside the tamper cavity, was said to be levitated. The three tests of Operation Sandstone, in 1948, used Fat Man designs with levitated pits. The largest yield was 49 kilotons, more than twice the yield of the unlevitated Fat Man.

It was immediately clear that implosion was the best design for a fission weapon. Its only drawback seemed to be its diameter: Fat Man was 5 feet (1.5 m) wide, versus 2 feet (60 cm) for Little Boy.

Eleven years later, implosion designs had advanced sufficiently that the 5 foot-diameter sphere of Fat Man had been reduced to a 1 foot-diameter (30 cm) cylinder 2 feet (60 cm) long, the Swan device.

The Pu-239 pit of Fat Man was only 3.6 inches (9 cm) in diameter, the size of a softball. The bulk of Fat Man's girth was the implosion mechanism, namely concentric layers of U-238, aluminium, and high explosives. The key to reducing that girth was the two-point implosion design.

A very inefficient implosion design is one that simply reshapes an ovoid into a sphere, with minimal compression. In linear implosion, an untamped, solid, elongated mass of Pu-239, larger than critical mass in a sphere, is embedded inside a cylinder of high explosive with a detonator at each end.

Detonation makes the pit critical by driving the ends inward, creating a spherical shape. The shock may also change plutonium from delta to alpha phase, increasing its density by 23%, but without the inward momentum of a true implosion. The lack of compression makes it inefficient, but the simplicity and small diameter make it suitable for use in artillery shells and atomic demolition munitions (ADMs), also known as backpack or suitcase nukes.
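The reason a density change alone helps make the mass critical is the textbook scaling of bare-sphere critical mass with density; that standard relation is quoted here only as an illustrative aside, not as a figure from this article's sources:

$$M_c \propto \frac{1}{\rho^{2}}, \qquad \left(\frac{\rho_{\alpha}}{\rho_{\delta}}\right)^{-2} = (1.23)^{-2} \approx 0.66$$

so a 23% jump in density lowers the critical mass by roughly a third, on top of the geometric effect of reshaping the elongated mass into a sphere.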

All such low-yield battlefield weapons, whether gun-type U-235 designs or linear implosion Pu-239 designs, pay a high price in fissile material in order to achieve diameters between six and ten inches (152 and 254 mm).

A more efficient two-point implosion system uses two high explosive lenses and a hollow pit.

A hollow plutonium pit was the original plan for the 1945 Fat Man bomb, but there was not enough time to develop and test the implosion system for it. A simpler solid-pit design was considered more reliable, given the time constraint, but it required a heavy U-238 tamper, a thick aluminium pusher, and three tons of high explosives.

After the war, interest in the hollow pit design was revived. Its obvious advantage is that a hollow shell of plutonium, shock-deformed and driven inward toward its empty center, would carry momentum into its violent assembly as a solid sphere. It would be self-tamping, requiring a smaller U-238 tamper, no aluminum pusher, and less high explosive. The hollow pit made levitation obsolete.

The Fat Man bomb had two concentric, spherical shells of high explosives, each about 10 inches (25 cm) thick. The inner shell drove the implosion. The outer shell consisted of a soccer-ball pattern of 32 high explosive lenses, each of which converted the convex wave from its detonator into a concave wave matching the contour of the outer surface of the inner shell. If these 32 lenses could be replaced with only two, the high explosive sphere could become an ellipsoid (prolate spheroid) with a much smaller diameter.

A good illustration of these two features is a 1956 drawing from the Swedish nuclear weapon program (which was terminated before it produced a test explosion). The drawing shows the essential elements of the two-point hollow-pit design.

There are similar drawings in the open literature that come from the post-war German nuclear bomb program, which was also terminated, and from the French program, which produced an arsenal.

The mechanism of the high explosive lens (diagram item #6) is not shown in the Swedish drawing, but a standard lens made of fast and slow high explosives, as in Fat Man, would be much longer than the shape depicted. For a single high explosive lens to generate a concave wave that envelops an entire hemisphere, it must either be very long or the part of the wave on a direct line from the detonator to the pit must be slowed dramatically.

A slow high explosive is too fast, but the flying plate of an "air lens" is not. A metal plate, shock-deformed, and pushed across an empty space can be designed to move slowly enough. A two-point implosion system using air lens technology can have a length no more than twice its diameter, as in the Swedish diagram above.

The next step in miniaturization was to speed up the fissioning of the pit to reduce the amount of time inertial confinement needed. The hollow pit provided an ideal location to introduce fusion for the boosting of fission. A 50-50 mixture of tritium and deuterium gas, pumped into the pit during arming, will fuse into helium and release free neutrons soon after fission begins. The neutrons will start a large number of new chain reactions while the pit is still critical.
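For reference, the boosting reaction referred to here is the well-known D-T reaction:

$$^{2}\mathrm{H} + {}^{3}\mathrm{H} \;\rightarrow\; {}^{4}\mathrm{He}\;(3.5\ \mathrm{MeV}) + n\;(14.1\ \mathrm{MeV})$$

and it is these 14.1 MeV neutrons, far more energetic than typical fission-spectrum neutrons, that start the additional chain reactions in the still-critical pit.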

Once the hollow pit is perfected, there is little reason not to boost.

The concept of fusion-boosted fission was first tested on May 25, 1951, in the Item shot of Operation Greenhouse, Eniwetok, yield 45.5 kilotons.

Since boosting is required to attain full design yield, any reduction in boosting reduces yield. Boosted weapons are thus variable-yield weapons. Yield can be reduced any time before detonation, simply by putting less than the full amount of tritium into the pit during the arming procedure.

The first device whose dimensions suggest employment of all these features (two-point, hollow-pit, fusion-boosted implosion) was the Swan device, tested June 22, 1956, as the Inca shot of Operation Redwing, at Eniwetok. Its yield was 15 kilotons, about the same as Little Boy, the Hiroshima bomb. It weighed 105 lb (47.6 kg) and was cylindrical in shape, 11.6 inches (29.5 cm) in diameter and 22.9 inches (58 cm) long. The above schematic illustrates what were probably its essential features.

Eleven days later, July 3, 1956, the Swan was test-fired again at Eniwetok, as the Mohawk shot of Redwing. This time it served as the primary, or first stage, of a two-stage thermonuclear device, a role it played in a dozen such tests during the 1950s. Swan was the first off-the-shelf, multi-use primary, and the prototype for all that followed.

After the success of Swan, 11 or 12 inches (300 mm) seemed to become the standard diameter of boosted single-stage devices tested during the 1950s. Length was usually twice the diameter, but one such device, which became the W54 warhead, was closer to a sphere, only 15 inches (380 mm) long. It was tested two dozen times in the 1957-62 period before being deployed. No other design had such a long string of test failures. Since the longer devices tended to work correctly on the first try, there must have been some difficulty in flattening the two high explosive lenses enough to achieve the desired length-to-width ratio.

One of the applications of the W54 was the Davy Crockett XM-388 recoilless rifle projectile, shown here in comparison to its Fat Man predecessor, dimensions in inches.

Another benefit of boosting, in addition to making weapons smaller, lighter, and with less fissile material for a given yield, is that it renders weapons immune to radiation interference (RI). It was discovered in the mid-1950s that plutonium pits would be particularly susceptible to partial pre-detonation if exposed to the intense radiation of a nearby nuclear explosion (electronics might also be damaged, but this was a separate issue). RI was a particular problem before effective early warning radar systems because a first strike attack might make retaliatory weapons useless. Boosting reduces the amount of plutonium needed in a weapon to below the quantity which would be vulnerable to this effect.

Pure fission or fusion-boosted fission weapons can be made to yield hundreds of kilotons, at great expense in fissile material and tritium, but by far the most efficient way to increase nuclear weapon yield beyond ten or so kilotons is to tack on a second independent stage, called a secondary.

In the 1940s, bomb designers at Los Alamos thought the secondary would be a canister of deuterium in liquified or hydride form. The fusion reaction would be D-D, harder to achieve than D-T, but more affordable. A fission bomb at one end would shock-compress and heat the near end, and fusion would propagate through the canister to the far end. Mathematical simulations showed it wouldn't work, even with large amounts of prohibitively expensive tritium added in.

The entire fusion fuel canister would need to be enveloped by fission energy, to both compress and heat it, as with the booster charge in a boosted primary. The design breakthrough came in January 1951, when Edward Teller and Stanisław Ulam invented radiation implosion - for nearly three decades known publicly only as the Teller-Ulam H-bomb secret.

The concept of radiation implosion was first tested on May 9, 1951, in the George shot of Operation Greenhouse, Eniwetok, yield 225 kilotons. The first full test was on November 1, 1952, the Mike shot of Operation Ivy, Eniwetok, yield 10.4 megatons.

In radiation implosion, the burst of x-ray energy coming from an exploding primary is captured and contained within an opaque-walled radiation channel which surrounds the nuclear energy components of the secondary. For a millionth of a second, most of the energy of several kilotons of TNT is absorbed by a plasma (superheated gas) generated from plastic foam in the radiation channel. With energy going in and not coming out, the plasma rises to solar core temperatures and expands with solar core pressures. Nearby objects which are still cool are crushed by the temperature difference.

The cool nuclear materials surrounded by the radiation channel are imploded much like the pit of the primary, except with more force. This greater pressure enables the secondary to be significantly more powerful than the primary, without being much larger.

For example, for the Redwing Mohawk test on July 3, 1956, a secondary called the Flute was attached to the Swan primary. The Flute was 15 inches (38 cm) in diameter and 23.4 inches (59 cm) long, about the size of the Swan. But it weighed ten times as much and yielded 24 times as much energy (355 kilotons, vs 15 kilotons).

Equally important, the active ingredients in the Flute probably cost no more than those in the Swan. Most of the fission came from cheap U-238, and the tritium was manufactured in place during the explosion. Only the spark plug at the axis of the secondary needed to be fissile.

A spherical secondary can achieve higher implosion densities than a cylindrical secondary, because spherical implosion pushes in from all directions toward the same spot. However, in warheads yielding more than one megaton, the diameter of a spherical secondary would be too large for most applications. A cylindrical secondary is necessary in such cases. The small, cone-shaped re-entry vehicles in multiple-warhead ballistic missiles after 1970 tended to have warheads with spherical secondaries, and yields of a few hundred kilotons.

As with boosting, the advantages of the two-stage thermonuclear design are so great that there is little incentive not to use it, once a nation has mastered the technology.

The initial impetus behind the two-stage weapon was President Truman's 1950 promise to build a 10-megaton hydrogen superbomb as America's response to the 1949 test of the first Soviet fission bomb. But the resulting invention turned out to be the cheapest and most compact way to build small nuclear bombs as well as large ones, erasing any meaningful distinction between A-bombs and H-bombs, and between boosters and supers. All the best techniques for fission and fusion explosions are incorporated into one all-encompassing, fully-scalable design principle. Even six-inch (152 mm) diameter nuclear artillery shells can be two-stage thermonuclears.

In the ensuing fifty years, nobody has come up with a better way to build a nuclear bomb. It is the design of choice for the U.S., Russia, Britain, France, and China, the five thermonuclear powers. The other nuclear-armed nations, Israel, India, Pakistan, and North Korea, probably have single-stage weapons, possibly boosted.

In a two-stage thermonuclear weapon, three types of energy emerge from the primary to impact the secondary: the expanding hot gases from high explosive charges which implode the primary, plus the electromagnetic radiation and the neutrons from the primary's nuclear detonation. An essential energy transfer modulator called the interstage, between the primary and the secondary, protects the secondary from the hot gases and channels the electromagnetic radiation and neutrons toward the right place at the right time.

There is very little information in the open literature about the mechanism of the interstage. Its first mention in a U.S. government document formally released to the public appears to be a caption in a recent graphic promoting the Reliable Replacement Warhead Program. If built, this new design would replace "toxic, brittle material" and "expensive 'special' material" in the interstage. This statement suggests the interstage may contain beryllium to moderate the flux of neutrons from the primary, and perhaps something to absorb and re-radiate the x-rays in a particular manner.

The interstage and the secondary are encased together inside a stainless steel membrane to form the canned subassembly (CSA), an arrangement which has never been depicted in any open-source drawing. The most detailed illustration of an interstage shows a British thermonuclear weapon with a cluster of items between its primary and a cylindrical secondary. They are labeled "end-cap and neutron focus lens," "reflector/neutron gun carriage," and "reflector wrap." The origin of the drawing, posted on the internet by Greenpeace, is uncertain, and there is no accompanying explanation.

All modern nuclear weapons make some use of D-T fusion. Even pure fission weapons include neutron generators which are high-voltage vacuum tubes containing trace amounts of tritium and deuterium.

However, in the public perception, hydrogen bombs, or H-bombs, are multi-megaton devices a thousand times more powerful than Hiroshima's Little Boy. Such high-yield bombs are actually two-stage thermonuclears, scaled up to the desired yield, with uranium fission, as usual, providing most of their energy.

The idea of the hydrogen bomb first came to public attention in 1949, when prominent scientists openly recommended against building nuclear bombs more powerful than the standard pure-fission model, on both moral and practical grounds. Their assumption was that critical mass considerations would limit the potential size of fission explosions, but that a fusion explosion could be as large as its supply of fuel, which has no critical mass limit. In 1949, the Soviets exploded their first fission bomb, and in 1950 President Truman ended the H-bomb debate by ordering the Los Alamos designers to build one.

In 1952, the 10.4-megaton Ivy Mike explosion was announced as the first hydrogen bomb test, reinforcing the idea that hydrogen bombs are a thousand times more powerful than fission bombs.

In 1954, J. Robert Oppenheimer was labeled a hydrogen bomb opponent. The public did not know there were two kinds of hydrogen bomb (neither of which is accurately described as a hydrogen bomb). On May 23, when his security clearance was revoked, item three of the four public findings against him was "his conduct in the hydrogen bomb program." In 1949, Oppenheimer had supported single-stage fusion-boosted fission bombs, to maximize the explosive power of the arsenal given the trade-off between plutonium and tritium production. He opposed two-stage thermonuclear bombs until 1951, when radiation implosion, which he called "technically sweet", first made them practical. He no longer objected. The complexity of his position was not revealed to the public until 1976, thirteen years after his death.

When ballistic missiles replaced bombers in the 1960s, most multi-megaton bombs were replaced by missile warheads (also two-stage thermonuclears) scaled down to one megaton or less.

The first effort to exploit the symbiotic relationship between fission and fusion was a 1940s design that mixed fission and fusion fuel in alternating thin layers. As a single-stage device, it would have been a cumbersome application of boosted fission. It first became practical when incorporated into the secondary of a two-stage thermonuclear weapon.

The U.S. name, Alarm Clock, was a nonsense code name. The Russian name for the same design was more descriptive: Sloika, a layered pastry cake. A single-stage Russian Sloika was tested on August 12, 1953. No single-stage U.S. version was tested, but the Union shot of Operation Castle, April 26, 1954, was a two-stage thermonuclear code-named Alarm Clock. Its yield, at Bikini, was 6.9 megatons.

Because the Russian Sloika test used dry lithium-6 deuteride eight months before the first U.S. test to use it (Castle Bravo, March 1, 1954), it was sometimes claimed that Russia won the H-bomb race. (The 1952 U.S. Ivy Mike test used cryogenically-cooled liquid deuterium as the fusion fuel in the secondary, and employed the D-D fusion reaction.) However, the first Russian test to use a radiation-imploded secondary, the essential feature of a true H-bomb, was on November 23, 1955, three years after Ivy Mike.

On March 1, 1954, America's largest-ever nuclear test explosion, the 15-megaton Bravo shot of Operation Castle at Bikini, delivered a promptly lethal dose of fission-product fallout to more than 6,000 square miles (16,000 km2) of Pacific Ocean surface. Radiation injuries to Marshall Islanders and Japanese fishermen made that fact public and revealed the role of fission in hydrogen bombs.

In response to the public alarm over fallout, an effort was made to design a clean multi-megaton weapon, relying almost entirely on fusion. Because fission of the cheap U-238 tamper provides energy at essentially no extra cost, a clean bomb that forgoes it needed to be much larger for the same yield. For the only time, a third stage, called the tertiary, was added, using the secondary as its primary. The device was called Bassoon. It was tested as the Zuni shot of Operation Redwing, at Bikini on May 28, 1956. With all the uranium in Bassoon replaced with a substitute material such as lead, its yield was 3.5 megatons, 85% fusion and only 15% fission.

On July 19, AEC Chairman Lewis Strauss said the clean bomb test "produced much of importance . . . from a humanitarian aspect." However, two days later the dirty version of Bassoon, with the uranium parts restored, was tested as the Tewa shot of Redwing. Its 5-megaton yield, 87% fission, was deliberately suppressed to keep fallout within a smaller area. This dirty version was later deployed as the three-stage, 25-megaton Mark-41 bomb, which was carried by U.S. Air Force bombers, but never tested at full yield.

As such, high-yield clean bombs were a public relations exercise. The actual deployed weapons were the dirty version, which maximized yield for the same size device.

Such "salted" weapons were requested by the U.S. Air Force and seriously investigated, possibly built and tested, but not deployed. In the 1964 edition of the DOD/AEC book The Effects of Nuclear Weapons, a new section titled Radiological Warfare clarified the issue. Fission products are as deadly as neutron-activated cobalt. The standard high-fission thermonuclear weapon is automatically a weapon of radiological warfare, as dirty as a cobalt bomb.

Initially, gamma radiation from the fission products of an equivalent-size fission-fusion-fission bomb is much more intense than that from Co-60: 15,000 times more intense at 1 hour; 35 times more intense at 1 week; 5 times more intense at 1 month; and about equal at 6 months. Thereafter the fission products drop off rapidly, so that Co-60 fallout is 8 times more intense than fission at 1 year and 150 times more intense at 5 years. The very long-lived isotopes produced by fission would overtake the Co-60 again after about 75 years.

In 1954, to explain the surprising amount of fission-product fallout produced by hydrogen bombs, Ralph Lapp coined the term fission-fusion-fission to describe a process inside what he called a three-stage thermonuclear weapon. His process explanation was correct, but his choice of terms caused confusion in the open literature. The stages of a nuclear weapon are not fission, fusion, and fission. They are the primary, the secondary, and, in one exceptionally powerful weapon, the tertiary. Each of these stages employs fission, fusion, and fission.

A neutron bomb, technically referred to as an enhanced radiation weapon (ERW), is a type of tactical nuclear weapon designed specifically to release a large portion of its energy as energetic neutron radiation. This contrasts with standard thermonuclear weapons, which are designed to capture this intense neutron radiation to increase its overall explosive yield. In terms of yield, ERWs typically produce about one-tenth that of a fission-type atomic weapon. Even with their significantly lower explosive power, ERWs are still capable of much greater destruction than any conventional bomb. Meanwhile, relative to other nuclear weapons, damage is more focused on biological material than on material infrastructure (though extreme blast and heat effects are not eliminated).

Although officially known as enhanced radiation weapons, ERWs are more accurately described as suppressed-yield weapons. When the yield of a nuclear weapon is less than one kiloton, its lethal radius from blast, 700 m (2,300 ft), is less than that from its neutron radiation. However, the blast is more than potent enough to destroy most structures, which are less resistant to blast effects than even unprotected human beings. Blast pressures of upwards of 20 psi are survivable, whereas most buildings will collapse at a pressure of only 5 psi.

ERWs were two-stage thermonuclears with all non-essential uranium removed to minimize fission yield. Fusion provided the neutrons. Developed in the 1950s, they were first deployed in the 1970s, by U.S. forces in Europe. The last ones were retired in the 1990s.

A neutron bomb is only feasible if the yield is sufficiently high that efficient fusion-stage ignition is possible, and if the yield is low enough that the case thickness will not absorb too many neutrons. This means that neutron bombs have a yield range of 1–10 kilotons, with the fission proportion varying from 50% at 1 kiloton to 25% at 10 kilotons (all of which comes from the primary stage). The neutron output per kiloton is then 10–15 times greater than for a pure fission implosion weapon or for a strategic warhead like the W87 or W88.

In 1999, nuclear weapon design was in the news again, for the first time in decades. In January, the U.S. House of Representatives released the Cox Report (Christopher Cox R-CA) which alleged that China had somehow acquired classified information about the U.S. W88 warhead. Nine months later, Wen Ho Lee, a Taiwanese immigrant working at Los Alamos, was publicly accused of spying, arrested, and served nine months in pre-trial detention, before the case against him was dismissed. It is not clear that there was, in fact, any espionage.

In the course of eighteen months of news coverage, the W88 warhead was described in unusual detail. The New York Times printed a schematic diagram on its front page. The most detailed drawing appeared in A Convenient Spy, the 2001 book on the Wen Ho Lee case by Dan Stober and Ian Hoffman, adapted and shown here with permission.

Designed for use on Trident II (D-5) submarine-launched ballistic missiles, the W88 entered service in 1990 and was the last warhead designed for the U.S. arsenal. It has been described as the most advanced, although open literature accounts do not indicate any major design features that were not available to U.S. designers in 1958.

The above diagram shows all the standard features of ballistic missile warheads since the 1960s, with two exceptions that give it a higher yield for its size.

Notice that the alternating layers of fission and fusion material in the secondary are an application of the Alarm Clock/Sloika principle.

The United States has not produced any nuclear warheads since 1989, when the Rocky Flats pit production plant, near Boulder, Colorado, was shut down for environmental reasons. With the end of the Cold War two years later, the production line was idled except for inspection and maintenance functions.

The National Nuclear Security Administration, the latest successor to the Atomic Energy Commission and the Department of Energy for nuclear weapons work, has proposed building a new pit facility and starting the production line for a new warhead called the Reliable Replacement Warhead (RRW). Two advertised safety improvements of the RRW would be a return to the use of "insensitive high explosives which are far less susceptible to accidental detonation", and the elimination of "certain hazardous materials, such as beryllium, that are harmful to people and the environment." Since the new warhead would not require any nuclear testing, it could not use a new design with untested concepts.

All the nuclear weapon design innovations discussed in this article originated from the following three labs in the manner described. Other nuclear weapon design labs in other countries duplicated those design innovations independently, reverse-engineered them from fallout analysis, or acquired them by espionage.

The first systematic exploration of nuclear weapon design concepts took place in the summer of 1942 at the University of California, Berkeley. Important early discoveries had been made at the adjacent Lawrence Berkeley Laboratory, such as the 1940 cyclotron-made production and isolation of plutonium. A Berkeley professor, J. Robert Oppenheimer, had just been hired to run the nation's secret bomb design effort. His first act was to convene the 1942 summer conference.

By the time he moved his operation to the new secret town of Los Alamos, New Mexico, in the spring of 1943, the accumulated wisdom on nuclear weapon design consisted of five lectures by Berkeley professor Robert Serber, transcribed and distributed as the Los Alamos Primer. The Primer addressed fission energy, neutron production and capture, nuclear chain reactions, critical mass, tampers, predetonation, and three methods of assembling a bomb: gun assembly, implosion, and "autocatalytic methods," the one approach that turned out to be a dead end.

At Los Alamos, it was found in April 1944 by Emilio G. Segrè that the proposed Thin Man Gun assembly type bomb would not work for plutonium because of predetonation problems caused by Pu-240 impurities. So Fat Man, the implosion-type bomb, was given high priority as the only option for plutonium. The Berkeley discussions had generated theoretical estimates of critical mass, but nothing precise. The main wartime job at Los Alamos was the experimental determination of critical mass, which had to wait until sufficient amounts of fissile material arrived from the production plants: uranium from Oak Ridge, Tennessee, and plutonium from the Hanford site in Washington.

In 1945, using the results of critical mass experiments, Los Alamos technicians fabricated and assembled components for four bombs: the Trinity Gadget, Little Boy, Fat Man, and an unused spare Fat Man. After the war, those who could, including Oppenheimer, returned to university teaching positions. Those who remained worked on levitated and hollow pits and conducted weapon effects tests such as Crossroads Able and Baker at Bikini Atoll in 1946.

All of the essential ideas for incorporating fusion into nuclear weapons originated at Los Alamos between 1946 and 1952. After the Teller-Ulam radiation implosion breakthrough of 1951, the technical implications and possibilities were fully explored, but ideas not directly relevant to making the largest possible bombs for long-range Air Force bombers were shelved.

Because of Oppenheimer's initial position in the H-bomb debate, in opposition to large thermonuclear weapons, and the assumption that he still had influence over Los Alamos despite his departure, political allies of Edward Teller decided he needed his own laboratory in order to pursue H-bombs. By the time it was opened in 1952, in Livermore, California, Los Alamos had finished the job Livermore was designed to do.

With its original mission no longer available, the Livermore lab tried radical new designs that failed. Its first three nuclear tests were fizzles: in 1953, two single-stage fission devices with uranium hydride pits, and in 1954, a two-stage thermonuclear device in which the secondary heated up prematurely, too fast for radiation implosion to work properly.

Shifting gears, Livermore settled for taking ideas Los Alamos had shelved and developing them for the Army and Navy. This led Livermore to specialize in small-diameter tactical weapons, particularly ones using two-point implosion systems, such as the Swan. Small-diameter tactical weapons became primaries for small-diameter secondaries. Around 1960, when the superpower arms race became a ballistic missile race, Livermore warheads were more useful than the large, heavy Los Alamos warheads. Los Alamos warheads were used on the first intermediate-range ballistic missiles, IRBMs, but smaller Livermore warheads were used on the first intercontinental ballistic missiles, ICBMs, and submarine-launched ballistic missiles, SLBMs, as well as on the first multiple warhead systems on such missiles.

In 1957 and 1958 both labs built and tested as many designs as possible, in anticipation that a planned 1958 test ban might become permanent. By the time testing resumed in 1961 the two labs had become duplicates of each other, and design jobs were assigned more on workload considerations than lab specialty. Some designs were horse-traded. For example, the W38 warhead for the Titan I missile started out as a Livermore project, was given to Los Alamos when it became the Atlas missile warhead, and in 1959 was given back to Livermore, in trade for the W54 Davy Crockett warhead, which went from Livermore to Los Alamos.

The period of real innovation was ending by then, anyway. Warhead designs after 1960 took on the character of model changes, with every new missile getting a new warhead for marketing reasons. The chief substantive change involved packing more fissile uranium into the secondary, as it became available with continued uranium enrichment and the dismantlement of the large high-yield bombs.

Nuclear weapons are in large part designed by trial and error. The trial often involves test explosion of a prototype.

In a nuclear explosion, a large number of discrete events, with various probabilities, aggregate into short-lived, chaotic energy flows inside the device casing. Complex mathematical models are required to approximate the processes, and in the 1950s there were no computers powerful enough to run them properly. Even today's computers and simulation software are not adequate.

It was easy enough to design reliable weapons for the stockpile. If the prototype worked, it could be weaponized and mass produced.

It was much more difficult to understand how it worked or why it failed. Designers gathered as much data as possible during the explosion, before the device destroyed itself, and used the data to calibrate their models, often by inserting fudge factors into equations to make the simulations match experimental results. They also analyzed the weapon debris in fallout to see how much of a potential nuclear reaction had taken place.

An important tool for test analysis was the diagnostic light pipe. A probe inside a test device could transmit information by heating a plate of metal to incandescence, an event that could be recorded at the far end of a long, very straight pipe.

The picture below shows the Shrimp device, detonated on March 1, 1954 at Bikini, as the Castle Bravo test. Its 15-megaton explosion was the largest ever by the United States. The silhouette of a man is shown for scale. The device is supported from below, at the ends. The pipes going into the shot cab ceiling, which appear to be supports, are diagnostic light pipes. The eight pipes at the right end (1) sent information about the detonation of the primary. Two in the middle (2) marked the time when x-radiation from the primary reached the radiation channel around the secondary. The last two pipes (3) noted the time radiation reached the far end of the radiation channel, the difference between (2) and (3) being the radiation transit time for the channel.

From the shot cab, the pipes turned horizontal and traveled 7500 ft (2.3 km), along a causeway built on the Bikini reef, to a remote-controlled data collection bunker on Namu Island.

While x-rays would normally travel at the speed of light through a low-density material like the plastic foam channel filler between (2) and (3), the intensity of radiation from the exploding primary created a relatively opaque radiation front in the channel filler which acted like a slow-moving logjam to retard the passage of radiant energy. Behind this moving front was a fully-ionized, low-z (low atomic number) plasma heated to 20,000 °C, soaking up energy like a black body, and eventually driving the implosion of the secondary.

The radiation transit time, on the order of half a microsecond, is the time it takes the entire radiation channel to reach thermal equilibrium as the radiation front moves down its length. The implosion of the secondary is based on the temperature difference between the hot channel and the cool interior of the secondary. Its timing is important because the interior of the secondary is subject to neutron preheat.

While the radiation channel is heating and starting the implosion, neutrons from the primary catch up with the x-rays, penetrate into the secondary and start breeding tritium with the third reaction noted in the first section above. This Li-6 + n reaction is exothermic, producing 5 MeV per event. The spark plug is not yet compressed and thus is not critical, so there won't be significant fission or fusion. But if enough neutrons arrive before implosion of the secondary is complete, the crucial temperature difference will be degraded. This is the reported cause of failure for Livermore's first thermonuclear design, the Morgenstern device, tested as Castle Koon, April 7, 1954.

These timing issues are measured by light-pipe data. The mathematical simulations which they calibrate are called radiation flow hydrodynamics codes, or channel codes. They are used to predict the effect of future design modifications.

The most interesting data from Castle Bravo came from radio-chemical analysis of weapon debris in fallout. Because of a shortage of enriched lithium-6, 60% of the lithium in the Shrimp secondary was ordinary lithium-7, which doesn't breed tritium as easily as lithium-6 does. But it does breed lithium-6 as the product of an (n, 2n) reaction (one neutron in, two neutrons out), a known fact, but with unknown probability. The probability turned out to be high.

Fallout analysis revealed to designers that, with the (n, 2n) reaction, the Shrimp secondary effectively had two and a half times as much lithium-6 as expected. The tritium, the fusion yield, the neutrons, and the fission yield were all increased accordingly.

As noted above, Bravo's fallout analysis also told the outside world, for the first time, that thermonuclear bombs are more fission devices than fusion devices. A Japanese fishing boat, the Lucky Dragon, sailed home with enough fallout on its decks to allow scientists in Japan and elsewhere to determine, and announce, that most of the fallout had come from the fission of U-238 by fusion-produced 14 MeV neutrons.

The global alarm over radioactive fallout, which began with the Castle Bravo event, eventually drove nuclear testing underground. The last U.S. above-ground test took place at Johnston Island on November 4, 1962. During the next three decades, until September 23, 1992, the U.S. conducted an average of 2.4 underground nuclear explosions per month, all but a few at the Nevada Test Site (NTS) northwest of Las Vegas.

The Yucca Flat section of the NTS is covered with subsidence craters resulting from the collapse of terrain over radioactive underground caverns created by nuclear explosions (see photo).

After the 1974 Threshold Test Ban Treaty (TTBT), which limited underground explosions to 150 kilotons or less, warheads like the half-megaton W88 had to be tested at less than full yield. Since the primary must be detonated at full yield in order to generate data about the implosion of the secondary, the reduction in yield had to come from the secondary. Replacing much of the lithium-6 deuteride fusion fuel with lithium-7 hydride limited the deuterium available for fusion, and thus the overall yield, without changing the dynamics of the implosion. The functioning of the device could be evaluated using light pipes, other sensing devices, and analysis of trapped weapon debris. The full yield of the stockpiled weapon could be calculated by extrapolation.

When two-stage weapons became standard in the early 1950s, weapon design determined the layout of America's new, widely dispersed production facilities, and vice versa.

Because primaries tend to be bulky, especially in diameter, plutonium, which has a smaller critical mass than uranium, is the fissile material of choice for pits, with beryllium reflectors. The Rocky Flats plant near Boulder, Colorado, was built in 1952 for pit production and consequently became the plutonium and beryllium fabrication facility.

The Y-12 plant in Oak Ridge, Tennessee, where mass spectrometers called Calutrons had enriched uranium for the Manhattan Project, was redesigned to make secondaries. Fissile U-235 makes the best spark plugs because its critical mass is larger, especially in the cylindrical shape of early thermonuclear secondaries. Early experiments used the two fissile materials in combination, as composite Pu-Oy (plutonium and oralloy, i.e. highly enriched uranium) pits and spark plugs, but for mass production, it was easier to let the factories specialize: plutonium pits in primaries, uranium spark plugs and pushers in secondaries.

Y-12 made lithium-6 deuteride fusion fuel and U-238 parts, the other two ingredients of secondaries.

The Savannah River plant in Aiken, South Carolina, also built in 1952, operated nuclear reactors which converted U-238 into Pu-239 for pits, and lithium-6 (produced at Y-12) into tritium for booster gas. Since its reactors were moderated with heavy water, deuterium oxide, it also made deuterium for booster gas and for Y-12 to use in making lithium-6 deuteride.

It is inherently dangerous to have a weapon containing a quantity and shape of fissile material which can form a critical mass through a relatively simple accident. Because of this danger, the high explosives in Little Boy (four bags of Cordite powder) were inserted into the bomb in flight, shortly after takeoff on August 6, 1945. It was the first time a gun-type nuclear weapon had ever been fully assembled.

Also, if the weapon falls into water, the moderating effect of the water can cause a criticality accident, even without the weapon being physically damaged.

Gun-type weapons have always been inherently unsafe.

Neither of these effects is likely with implosion weapons since there is normally insufficient fissile material to form a critical mass without the correct detonation of the lenses. However, the earliest implosion weapons had pits so close to criticality that accidental detonation with some nuclear yield was a concern.

On August 9, 1945, Fat Man was loaded onto its airplane fully assembled, but later, when levitated pits made a space between the pit and the tamper, it was feasible to utilize in-flight pit insertion. The bomber would take off with no fissile material in the bomb. Some older implosion-type weapons, such as the US Mark 4 and Mark 5, used this system.

In-flight pit insertion will not work with a hollow pit in contact with its tamper.

As shown in the diagram, one method of decreasing the likelihood of accidental detonation employed metal balls. The balls were emptied into the hollow of the pit; by filling the hollow, they prevented the pit from being imploded to a supercritical density. This design was used in the Green Grass weapon, also known as the Interim Megaton Weapon, and was also used in the Violet Club and Yellow Sun Mk.1 bombs.

Alternatively, the pit can be "safed" by having its normally hollow core filled with an inert material such as a fine metal chain, possibly made of cadmium to absorb neutrons. While the chain is in the center of the pit, the pit cannot be compressed into an appropriate shape to fission; when the weapon is to be armed, the chain is removed. Similarly, although a serious fire could detonate the explosives, destroying the pit and spreading plutonium to contaminate the surroundings, as has happened in several weapons accidents, it could not cause a nuclear explosion.

The US W47 warhead used in Polaris A1 and Polaris A2 had a safety device consisting of a boron-coated wire inserted into the hollow pit at manufacture. The warhead was armed by withdrawing the wire onto a spool driven by an electric motor. However, once withdrawn, the wire could not be re-inserted.

While the firing of one detonator out of many will not cause a hollow pit to go critical, especially a low-mass hollow pit that requires boosting, the introduction of two-point implosion systems made that possibility a real concern.

In a two-point system, if one detonator fires, one entire hemisphere of the pit will implode as designed. The high-explosive charge surrounding the other hemisphere will explode progressively, from the equator toward the opposite pole. Ideally, this will pinch the equator and squeeze the second hemisphere away from the first, like toothpaste in a tube. By the time the explosion envelops it, its implosion will be separated both in time and space from the implosion of the first hemisphere. The resulting dumbbell shape, with each end reaching maximum density at a different time, may not become critical.

Unfortunately, it is not possible to tell on the drawing board how this will play out. Nor is it possible using a dummy pit of U-238 and high-speed x-ray cameras, although such tests are helpful. For final determination, a test needs to be made with real fissile material. Consequently, starting in 1957, a year after Swan, both labs began one-point safety tests.

Out of 25 one-point safety tests conducted in 1957 and 1958, seven had zero or slight nuclear yield (success), three had high yields of 300 t to 500 t (severe failure), and the rest had unacceptable yields between those extremes.

Of particular concern was Livermore's W47 warhead for the Polaris submarine missile. The last test before the 1958 moratorium was a one-point test of the W47 primary, which had an unacceptably high nuclear yield of 400 lb (180 kg) of TNT equivalent (Hardtack II Titania). With the test moratorium in force, there was no way to refine the design and make it inherently one-point safe. Los Alamos had a suitable primary that was one-point safe, but rather than share with Los Alamos the credit for designing the first SLBM warhead, Livermore chose to use mechanical safing on its own inherently unsafe primary. The wire safety scheme described above was the result.

It turns out that the W47 may have been safer than anticipated. The wire-safety system may have rendered most of the warheads "duds", unable to detonate properly when fired.

When testing resumed in 1961, and continued for three decades, there was sufficient time to make all warhead designs inherently one-point safe, without need for mechanical safing.

In addition to the above steps to reduce the probability of a nuclear detonation arising from a single fault, locking mechanisms referred to by NATO states as Permissive Action Links are sometimes attached to the control mechanisms for nuclear warheads. Permissive Action Links act solely to prevent an unauthorised use of a nuclear weapon.




Computer-aided design

A short animation of CAD software in action.

Computer-Aided Design (CAD) is the use of computer technology to aid in the design and particularly the drafting (technical drawing and engineering drawing) of a part or product, including entire buildings. It is both a visual (or drawing) and symbol-based method of communication whose conventions are particular to a specific technical field.

Drafting can be done in two dimensions ("2D") and three dimensions ("3D").

Drafting is the communication of technical or engineering drawings and is the industrial arts sub-discipline that underlies all involved technical endeavors. In representing complex, three-dimensional objects in two-dimensional drawings, these objects have traditionally been represented by three projected views at right angles.
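The three orthogonal views mentioned above amount to projecting each 3-D point onto three mutually perpendicular planes. A minimal sketch of that idea follows (illustrative only; the coordinate and view-naming conventions here are assumptions, not those of any particular drafting standard):

```python
# Minimal orthographic projection: each view simply drops one coordinate.
# Conventions assumed here: z is "up", the front view looks along +y.

from typing import List, Tuple

Point3 = Tuple[float, float, float]
Point2 = Tuple[float, float]

def top_view(p: Point3) -> Point2:    # project onto the x-y plane
    x, y, z = p
    return (x, y)

def front_view(p: Point3) -> Point2:  # project onto the x-z plane
    x, y, z = p
    return (x, z)

def side_view(p: Point3) -> Point2:   # project onto the y-z plane
    x, y, z = p
    return (y, z)

# A unit cube's corners, rendered as three 2-D point sets.
cube: List[Point3] = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]
print([top_view(p) for p in cube])
print([front_view(p) for p in cube])
print([side_view(p) for p in cube])
```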

Current Computer-Aided Design software packages range from 2D vector-based drafting systems to 3D solid and surface modellers. Modern CAD packages also frequently allow rotations in three dimensions, allowing viewing of a designed object from any desired angle, even from the inside looking out. Some CAD software is capable of dynamic mathematical modeling, in which case it may be marketed as CADD — computer-aided design and drafting.

CAD is used in the design of tools and machinery and in the drafting and design of all types of buildings, from small residential types (houses) to the largest commercial and industrial structures (hospitals and factories).

CAD is mainly used for detailed engineering of 3D models and/or 2D drawings of physical components, but it is also used throughout the engineering process from conceptual design and layout of products, through strength and dynamic analysis of assemblies to definition of manufacturing methods of components.

CAD has become an especially important technology within the scope of computer-aided technologies, with benefits such as lower product development costs and a greatly shortened design cycle. CAD enables designers to lay out and develop work on screen, print it out and save it for future editing, saving time on their drawings.

Originally, software for Computer-Aided Design systems was developed with computer languages such as Fortran, but with the advancement of object-oriented programming methods this has radically changed. Typical modern parametric feature-based modelers and freeform surface systems are built around a number of key C-language modules with their own APIs. A CAD system can be seen as built up from the interaction of a graphical user interface (GUI) with NURBS geometry and/or boundary representation (B-rep) data via a geometric modeling kernel. A geometry constraint engine may also be employed to manage the associative relationships between geometry, such as wireframe geometry in a sketch or components in an assembly.
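As a concrete illustration of the NURBS geometry such a kernel manages, the sketch below evaluates a point on a NURBS curve directly from the Cox-de Boor recurrence. It is a minimal, self-contained example; real geometric modeling kernels use far more efficient and numerically careful algorithms.

```python
# Minimal NURBS curve evaluation built directly on the Cox-de Boor recurrence.
# Illustrative only: production kernels use optimized, robust algorithms
# (e.g. knot-span search plus de Boor's algorithm).

import math
from typing import List, Tuple

def basis(i: int, p: int, u: float, knots: List[float]) -> float:
    """B-spline basis function N_{i,p}(u) for the given knot vector."""
    if p == 0:
        in_span = knots[i] <= u < knots[i + 1]
        at_end = u == knots[-1] and knots[i] < u <= knots[i + 1]
        return 1.0 if (in_span or at_end) else 0.0
    left = 0.0
    if knots[i + p] != knots[i]:
        left = (u - knots[i]) / (knots[i + p] - knots[i]) * basis(i, p - 1, u, knots)
    right = 0.0
    if knots[i + p + 1] != knots[i + 1]:
        right = (knots[i + p + 1] - u) / (knots[i + p + 1] - knots[i + 1]) * basis(i + 1, p - 1, u, knots)
    return left + right

def nurbs_point(u: float, degree: int, knots: List[float],
                ctrl: List[Tuple[float, float]],
                weights: List[float]) -> Tuple[float, float]:
    """C(u) = sum_i N_{i,p}(u) w_i P_i / sum_i N_{i,p}(u) w_i (rational B-spline)."""
    nx = ny = den = 0.0
    for i, ((px, py), w) in enumerate(zip(ctrl, weights)):
        nw = basis(i, degree, u, knots) * w
        nx, ny, den = nx + nw * px, ny + nw * py, den + nw
    return (nx / den, ny / den)

# A quarter of the unit circle, the classic exact-conic NURBS example.
ctrl = [(1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
weights = [1.0, math.sqrt(2) / 2, 1.0]
knots = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
pt = nurbs_point(0.5, 2, knots, ctrl, weights)
print(pt, math.hypot(*pt))  # the evaluated point lies on the unit circle
```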

Unexpected capabilities of these associative relationships have led to a new form of prototyping called digital prototyping. In contrast to physical prototypes, which entail manufacturing time and material costs, digital prototypes allow for design verification and testing on screen, speeding time-to-market and decreasing costs. As technology evolves in this way, CAD has moved beyond a documentation tool (representing designs in graphical format) into a more robust designing tool that assists in the design process.

Today most Computer-Aided Design computers are Windows-based PCs. Some CAD systems also run on one of the Unix operating systems or on Linux. Some CAD systems, such as QCad, NX or CATIA V5, provide multiplatform support including Windows, Linux, UNIX and Mac OS X.

Generally, no special hardware is required, with the possible exception of a good graphics card, depending on the CAD software used. However, for complex product design, machines with high-speed (and possibly multiple) CPUs and large amounts of RAM are recommended. CAD was an application that benefited from the installation of a numeric coprocessor, especially in early personal computers. The human-machine interface is generally via a computer mouse but can also be via a pen and digitizing graphics tablet. Manipulation of the view of the model on the screen is also sometimes done with a spacemouse/SpaceBall. Some systems also support stereoscopic glasses for viewing the 3D model.

Computer-Aided Design is one of the many tools used by engineers and designers and is used in many ways depending on the profession of the user and the type of software in question. There are several different types of CAD. Each of these types of CAD system requires the operator to think differently about how to use it and to design virtual components in a different manner for each.

There are many producers of the lower-end 2D systems, including a number of free and open source programs. These provide an approach to the drawing process without all the fuss over scale and placement on the drawing sheet that accompanied hand drafting, since these can be adjusted as required during the creation of the final draft.

3D wireframe is basically an extension of 2D drafting. Each line has to be manually inserted into the drawing. The final product has no mass properties associated with it and cannot have features directly added to it, such as holes. The operator approaches these in a similar fashion to the 2D systems, although many 3D systems allow using the wireframe model to make the final engineering drawing views.
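A wireframe model in this sense is little more than a list of vertices and the edges connecting them, which is why it carries no mass properties. A minimal sketch follows (a hypothetical structure for illustration, not the internal format of any particular CAD system):

```python
# A bare-bones wireframe model: vertices plus edges, nothing more.
# There is no notion of enclosed volume, so mass properties cannot be derived.

from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Wireframe:
    vertices: List[Tuple[float, float, float]] = field(default_factory=list)
    edges: List[Tuple[int, int]] = field(default_factory=list)  # index pairs into vertices

    def add_vertex(self, x: float, y: float, z: float) -> int:
        self.vertices.append((x, y, z))
        return len(self.vertices) - 1

    def add_edge(self, a: int, b: int) -> None:
        self.edges.append((a, b))

# Each edge of this unit square is inserted by hand, as the text describes.
wf = Wireframe()
corners = [wf.add_vertex(x, y, 0.0) for x, y in [(0, 0), (1, 0), (1, 1), (0, 1)]]
for i in range(4):
    wf.add_edge(corners[i], corners[(i + 1) % 4])
print(len(wf.vertices), len(wf.edges))  # 4 vertices, 4 edges
```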

3D "dumb" solids (programs incorporating this technology include AutoCAD and Cadkey 19) are created in a way analogous to manipulations of real world objects. Basic three-dimensional geometric forms (prisms, cylinders, spheres, and so on) have solid volumes added or subtracted from them, as if assembling or cutting real-world objects. Two-dimensional projected views can easily be generated from the models. Basic 3D solids don't usually include tools to easily allow motion of components, set limits to their motion, or identify interference between components.

3D parametric solid modeling (programs incorporating this technology include Pro/ENGINEER, NX, the combination of UniGraphics and IDeas, CATIA V5, Autodesk Inventor, Alibre Design, TopSolid, T-FLEX CAD, SolidWorks, and Solid Edge) requires the operator to use what is referred to as "design intent". The objects and features created are adjustable. Any future modifications will be simple, difficult, or nearly impossible, depending on how the original part was created. One must think of this as being a "perfect world" representation of the component. If a feature was intended to be located from the center of the part, the operator needs to locate it from the center of the model, not, perhaps, from a more convenient edge or an arbitrary point, as could be done with "dumb" solids. Parametric solids require the operator to consider the consequences of each action carefully.
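The "design intent" point can be made concrete with a toy parametric feature: if a hole is dimensioned from the centre of a plate, widening the plate leaves the hole centred; if it had been dimensioned from one edge, it would not be. A minimal sketch, with hypothetical names rather than any real CAD API:

```python
# A toy parametric plate: the hole position is *derived* from the plate width,
# so changing the width automatically re-centres the hole (the "design intent").

from dataclasses import dataclass
from typing import Tuple

@dataclass
class Plate:
    width: float
    height: float
    hole_diameter: float

    @property
    def hole_center(self) -> Tuple[float, float]:
        # Intent: the hole stays at the centre of the plate, not at a fixed offset.
        return (self.width / 2.0, self.height / 2.0)

plate = Plate(width=100.0, height=60.0, hole_diameter=10.0)
print(plate.hole_center)   # (50.0, 30.0)

plate.width = 140.0        # a later modification
print(plate.hole_center)   # (70.0, 30.0) -- the hole followed the intent
```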

Draft views can be generated easily from the models. Assemblies usually incorporate tools to represent the motions of components, set their limits, and identify interference. The tool kits available for these systems are ever increasing, including 3D piping and injection mold design packages.

Top-end systems offer the capability to incorporate more organic, aesthetic, and ergonomic features into designs (Catia, GenerativeComponents). Freeform surface modelling is often combined with solids to allow the designer to create products that fit the human form and visual requirements as well as the interface with the machine.

Starting in the late 1980s, the development of readily affordable Computer-Aided Design programs that could be run on personal computers began a trend of massive downsizing in drafting departments in many small to mid-size companies. As a general rule, one CAD operator could readily replace at least three to five drafters using traditional methods. Additionally, many engineers began to do their own drafting work, further eliminating the need for traditional drafting departments. This trend mirrored that of the elimination of many office jobs traditionally performed by a secretary as word processors, spreadsheets, databases, etc. became standard software packages that "everyone" was expected to learn.

Another consequence has been that, since the latest advances were often quite expensive, small and even mid-size firms often could not compete against large firms that could use their computational edge for competitive purposes. Today, however, hardware and software costs have come down. Even high-end packages work on less expensive platforms, and some even support multiple platforms. The costs associated with CAD implementation are now more heavily weighted toward the costs of training in the use of these high-level tools, the cost of integrating CAD/CAM/CAE and PLM tools across multi-CAD and multi-platform environments, and the costs of modifying design workflows to exploit the full advantage of CAD tools.




Design

A Macintosh laptop computer. In some design fields, personal computers are used for both design and production

Design is used both as a noun and a verb. The term is often tied to the various applied arts and engineering (See design disciplines below). As a verb, "to design" refers to the process of originating and developing a plan for a product, structure, system, or component with intention. As a noun, "a design" is used for either the final (solution) plan (e.g. proposal, drawing, model, description) or the result of implementing that plan in the form of the final product of a design process. This classification aside, in its broadest sense no other limitations exist and the final product can be anything from socks and jewellery to graphical user interfaces and charts. Even virtual concepts such as corporate identity and cultural traditions such as celebration of certain holidays are sometimes designed. More recently, processes (in general) have also been treated as products of design, giving new meaning to the term "process design".

The person designing is called a designer, which is also a term used for people who work professionally in one of the various design areas, usually also specifying which area is being dealt with (such as a fashion designer, concept designer or web designer). Designing often requires a designer to consider the aesthetic, functional, and many other aspects of an object or a process, which usually requires considerable research, thought, modeling, interactive adjustment, and re-design.

Because design is defined so broadly, there is no universal language or unifying institution for designers of all disciplines. This allows for many differing philosophies and approaches toward the subject. However, serious study of design demands increased focus on the design process.

Design as a process can take many forms depending on the object being designed and the individual or individuals participating.

According to video game developer Dino Dini in a talk given at the 2005 Game Design and Technology Workshop held by Liverpool JM University, design underpins every form of creation from objects such as chairs to the way we plan and execute our lives. For this reason it is useful to seek out some common structure that can be applied to any kind of design, whether this be for video games, consumer products or one's own personal life.

For such an important concept, the question "What is Design?" appears to yield answers with limited usefulness. Dino Dini states that the design process can be defined as "The management of constraints". He identifies two kinds of constraint, negotiable and non-negotiable. The first step in the design process is the identification, classification and selection of constraints. The process of design then proceeds from here by manipulating design variables so as to satisfy the non-negotiable constraints and optimizing those which are negotiable. It is possible for a set of non-negotiable constraints to be in conflict resulting in a design with no solution; in this case the non-negotiable constraints must be revised. For example, take the design of a chair. A chair must support a certain weight to be useful, and this is a non-negotiable constraint. The cost of producing the chair might be another. The choice of materials and the aesthetic qualities of the chair might be negotiable.
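Dini's "management of constraints" can be sketched as a small search: reject any candidate that violates a non-negotiable constraint, then score the survivors on the negotiable ones. The chair example from the paragraph above is worked through below with made-up numbers, purely for illustration:

```python
# Toy "management of constraints" for the chair example.
# Non-negotiable: must support at least 120 kg. Negotiable: cost and aesthetics.
# All figures are invented for illustration.

candidates = [
    {"material": "pine",      "supports_kg": 110, "cost": 20, "aesthetics": 5},
    {"material": "oak",       "supports_kg": 160, "cost": 45, "aesthetics": 7},
    {"material": "steel",     "supports_kg": 300, "cost": 60, "aesthetics": 4},
    {"material": "aluminium", "supports_kg": 180, "cost": 55, "aesthetics": 6},
]

MIN_LOAD_KG = 120  # the non-negotiable constraint

def satisfies_non_negotiable(c):
    return c["supports_kg"] >= MIN_LOAD_KG

def negotiable_score(c):
    # Trade cheapness off against looks; the weighting itself is negotiable.
    return c["aesthetics"] - 0.1 * c["cost"]

feasible = [c for c in candidates if satisfies_non_negotiable(c)]
if not feasible:
    print("Non-negotiable constraints conflict; they must be revised.")
else:
    best = max(feasible, key=negotiable_score)
    print(best["material"])  # "oak", under these invented numbers
```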

Dino Dini theorizes that poor designs occur as a result of mismanaged constraints, something he claims can be seen in the way the video game industry makes "Must be Fun" a negotiable constraint where he believes it should be non-negotiable.

It should be noted that "the management of constraints" may not include the whole of what is involved in "constraint management" as defined in the context of a broader Theory of Constraints, depending on the scope of a design or a designer's position.

Something that is redesigned requires a different process than something designed for the first time. A redesign often includes an evaluation of the existing design, and the findings about what needs to change are often what drive the redesign process.

A design process may include a series of steps followed by designers. Depending on the product or service, some of these stages may be irrelevant, ignored in real-world situations in order to save time, reduce cost, or because they may be redundant in the situation.

There are countless philosophies for guiding design, as design values and their accompanying aspects within modern design vary both between schools of thought and among practicing designers. Design philosophies are usually used to determine design goals. A design goal may range from solving the least significant individual problem of the smallest element to the most holistic, influential utopian goals. Design goals are usually used to guide design. However, conflicts over immediate and minor goals may lead to questioning the purpose of design, perhaps in order to set better long-term or ultimate goals.

A design philosophy is a guide that helps with the choices made when designing, such as ergonomics, costs, economics, functionality, and methods of re-design. An example of a design philosophy is “dynamic change”, aimed at achieving an elegant or stylish look.

A design approach is a general philosophy that may or may not include a guide for specific methods. Some approaches guide the overall goal of the design; others guide the tendencies of the designer. A combination of approaches may be used if they do not conflict.

In philosophy, the abstract noun "design" refers to a pattern with a purpose. Design is thus contrasted with purposelessness, randomness, or lack of complexity.

To study the purpose of designs beyond individual goals (e.g. marketing, technology, education, entertainment, hobbies) is to question controversial politics, morals, ethics, and needs such as those described by Maslow's hierarchy of needs. "Purpose" may also lead to existential questions involving religious morals and teleology. These philosophies about the "purpose of" designs stand in contrast to philosophies for guiding design or methodology.

Often a designer (especially in commercial situations) is not in a position to define purpose. Whether a designer is, is not, or should be concerned with purpose or intended use beyond what they are expressly hired to influence is debatable and depends on the situation. A lack of understanding of, or disinterest in, the wider role of design in society may also be attributed to the commissioning agent or client rather than to the designer.

In structuration theory, achieving consensus and fulfillment of purpose is as continuous as society itself. Raised levels of achievement often lead to raised expectations. Design is both medium and outcome, generating a Janus-like face, with every ending marking a new beginning.

The word "design" is often considered ambiguous depending on the application.

Design is often viewed as a more rigorous form of art, or art with a clearly defined purpose. The distinction is usually made when someone other than the artist is defining the purpose. In graphic arts the distinction is often made between fine art and commercial art.

In the realm of the arts, design is more relevant to the "applied" arts, such as architecture and industrial design. Indeed, today the term design is widely associated with modern industrial product design as initiated by Raymond Loewy and the teachings of the Bauhaus and the Ulm School of Design (HfG Ulm) in Germany during the 20th century.

Design implies a conscious effort to create something that is both functional and aesthetically pleasing. For example, a graphic artist may design an advertisement poster. This person's job is to communicate the advertisement's message (the functional aspect) and to make it look good (the aesthetic aspect). The distinction between pure and applied arts is not completely clear, but one may consider Jackson Pollock's paintings (often criticized as "splatter") an example of pure art. One may assume his art does not convey a message, given the obvious differences between an advertisement poster and the mere possibility of an abstract message in a Jackson Pollock painting. One may speculate that Pollock, when painting, worked more intuitively than a graphic artist would when consciously designing a poster. However, Mark Getlein suggests that the principles of design are "almost instinctive", "built-in", "natural", and part of "our sense of 'rightness'." Pollock, as a trained artist, may have utilized design whether consciously or not.

Engineering is often viewed as a more rigorous form of design. Contrary views suggest that design is a component of engineering, distinct from production and the other operations that utilize engineering. A neutral view may suggest that design and engineering simply overlap, depending on the discipline of design. The American Heritage Dictionary defines design as "to conceive or fashion in the mind; invent" and "to formulate a plan", and defines engineering as "the application of scientific and mathematical principles to practical ends such as the design, manufacture, and operation of efficient and economical structures, machines, processes, and systems". Both are forms of problem-solving, with a defining distinction being the application of "scientific and mathematical principles". How much science is applied in a design is a question of what is considered "science", including the distinction between social science and natural science. Scientists at Xerox PARC characterized the distinction between design and engineering as "moving minds" versus "moving atoms".

The relationship between design and production is one of planning and executing. In theory, the plan should anticipate and compensate for potential problems in the execution process. Design involves problem-solving and creativity; in contrast, production involves a routine or pre-planned process. A design may also be a mere plan that does not include a production or engineering process, although a working knowledge of such processes is usually expected of designers. In some cases, it may be unnecessary or impractical to expect a designer who has the broad multidisciplinary knowledge required for such designs to also have detailed knowledge of how to produce the product.

Design and production are intertwined in many creative professional careers, meaning that problem-solving is part of execution and vice versa. As the cost of rearrangement increases, the need for separating design from production increases as well. For example, a high-budget project, such as a skyscraper, requires separating (design) architecture from (production) construction. A low-budget project, such as a locally printed office party invitation flyer, can be rearranged and printed dozens of times at the low cost of a few sheets of paper, a few drops of ink, and less than an hour of a desktop publisher's pay.

This is not to say that production never involves problem-solving or creativity, nor that design always involves creativity. Designs are rarely perfect and are sometimes repetitive. The imperfection of a design may task a production position (e.g. production artist, construction worker) with utilizing creativity or problem-solving skills to compensate for what was overlooked in the design process. Likewise, a design may be a simple repetition (copy) of a known preexisting solution, requiring minimal, if any, creativity or problem-solving skills from the designer.




Interior design

Interior design is a profession concerned with anything found inside a space: walls, windows, doors, finishes, textures, light, furnishings, and furniture. All of these elements are used by interior designers to develop a functional, safe, and aesthetically pleasing space for a building's user.

The work of an interior designer draws upon many disciplines, including environmental psychology, architecture, product design, and traditional decoration (aesthetics and cosmetics). Interior designers plan the spaces of almost every type of building, including hotels, corporate spaces, schools, hospitals, private residences, shopping malls, restaurants, theaters, and airport terminals. Today, interior designers must be attuned to architectural detailing, including floor plans, home renovations, and construction codes. Some interior designers are also architects.

The role of a designer probably came into existence in the 1720s in Western Europe, mostly being performed by men of diverse backgrounds. William Kent, who was trained as a history painter, is often cited as the first person to take charge of an entire interior, including internal architecture, furniture selection, and the hanging of paintings.

In London, this role was frequently filled by the upholsterer (sometimes called the upholder), while in Paris the marchand-mercier (a "merchant of goods" who acted as a general contractor) often filled this role. Architects in Great Britain and on the European continent also often served as interior designers. Robert Adam, the neoclassical architect, is perhaps the best-known late-century example of an architect who took on entire interiors, down to the doorknobs and fire-irons. Other 18th-century men who filled the role of interior designer include Sir William Chambers, James Wyatt, and Dominique Daguerre (a marchand-mercier who immigrated to England).

The modern practice of interior decoration began with Lenygon and Morant in London, Charles Alavoine and Jeanselme in Paris, and Herter Brothers (from 1864), Elsie de Wolfe, and Ogden Codman in New York.

Interior designers can specialize in a particular interior design discipline, such as residential and commercial design. Commercial design includes offices, hotels, schools, hospitals or other public buildings. Some interior designers develop expertise within a niche design area such as hospitality, health care and institutional design. In jurisdictions where the profession is regulated by the government, designers must meet broad qualifications and show competency in the entire scope of the profession, not only in a specialty. Designers may elect to obtain specialist certification offered by private organizations. Interior designers who also possess environmental expertise in design solutions for sustainable construction can receive accreditation in this area by taking the Leadership in Energy and Environmental Design (LEED) examination.

The specialty areas that involve interior designers are limited only by the imagination and are continually growing and changing. With the increase in the aging population, an increased focus has been placed on developing solutions to improve the living environment of the elderly population, which takes into account health and accessibility issues that can affect the design. Awareness of the ability of interior spaces to create positive changes in people's lives is increasing, so interior design is also becoming relevant to this type of advocacy.

There is a wide range of disciplines within the career of interior design. Some of these disciplines include: structure, function, specialized performance, special group needs, the discipline needed for business, computer technology, presentation skills, craft skills, social disciplines, promotional disciplines, professional disciplines, aesthetic disciplines, and disciplines with cultural implications. This list shows how interior design encompasses many different disciplines and requires education in science and technology as well as in creative and aesthetic fields.

There is a wide range of working conditions and employment opportunities within interior design. Large corporations often hire interior designers for regular day-to-day working hours, while designers at smaller firms usually work on a contract or per-job basis. Self-employed designers, who make up 26% of interior designers, usually work the most hours and often face pressure to find clients in order to support themselves. Interior designers often work under stress to meet deadlines, stay on budget, and meet clients' needs. Their work tends to involve a great deal of travel to visit different locations, studios, or clients' homes and offices. With the aid of recent technology, the process of contacting clients and communicating design alternatives has become easier and requires less travel. Some argue that virtual makeovers have revolutionized interior design from a customer perspective, making the design process more interactive and exciting, though in a relatively technological yet labor-intensive environment. Another option for someone wanting to start their own decorating business is to purchase a franchise. An interior decorating franchise gives the new business owner a nationally recognized name along with continued national advertising and publicity. Franchises also offer their own training programs as well as a business model and support system.

Postsecondary education, especially a bachelor's degree, is recommended for positions in interior design. Within the United States, 24 states, the District of Columbia, and Puerto Rico have some form of interior design legislation with regard to title and practice. The National Council for Interior Design Qualification (NCIDQ) administers a licensing exam. To be eligible to take the exam, a candidate must have a minimum of six years of combined education and experience in the field, of which at least two years must be postsecondary education. Once the examination has been passed, the designer may indicate that they are an NCIDQ certificate holder. In certain jurisdictions this is linked to the ability to practice or to self-identify as an interior designer. The laws vary greatly across the United States, and in some jurisdictions NCIDQ certification is required for a designer to call themselves a Certified, Registered, or Licensed Interior Designer. Licensing, certification, and registration are separate from the postsecondary education received; these credentials are administered and awarded within the interior design field and are not necessary for preparing construction drawings, applying for building permits, or supervising construction. In other jurisdictions there are no minimum qualifications, and anyone who wishes to may call themselves an interior designer. Continuing education is required by some states as part of maintaining a license.
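Read as a rule, the NCIDQ eligibility requirement summarized above reduces to a simple check on two quantities: total combined years and years of postsecondary education. The sketch below (in Python) is purely illustrative; the function name and example figures are invented and are not drawn from NCIDQ's own materials.

    # Illustrative check of the eligibility rule described in the text:
    # at least six years of combined education and experience, of which
    # at least two years must be postsecondary education.
    def meets_eligibility_rule(education_years: float, experience_years: float) -> bool:
        combined = education_years + experience_years
        return combined >= 6 and education_years >= 2

    print(meets_eligibility_rule(4, 3))   # True: four years of school, three of practice
    print(meets_eligibility_rule(1, 6))   # False: under two years of postsecondary education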

Interior design earnings vary based on employer, years of experience, and the reputation of the individual. Interior designers within the specialization of architectural design tend to earn higher and more stable salaries. For residential projects, self-employed interior designers usually earn an hourly fee plus a percentage of the total cost of furniture, lighting, artwork, and other design elements. For commercial projects, they may charge hourly fees or a flat fee for the whole project. The median annual earning for wage and salary interior designers in 2006 was $42,260. The middle 50 percent earned between $31,830 and $57,230; the lowest 10 percent earned less than $24,270, and the highest 10 percent earned more than $78,760.

While median earnings are an important indicator of average salaries, it is essential to look at additional key factors in any discussion of revenue generated from design services. Location, the demographics of the client base, and the scope of work all affect a designer's potential earnings. With regard to location, in central metropolitan areas, where living costs and median earnings are generally greater, the potential earnings of interior designers and decorators are also higher. Urban areas also attract a greater population of potential clients, thereby creating greater demand for design services. Additionally, as the average square footage of homes and offices has increased over time, the scope of work performed translates directly into higher earnings. Scope refers to the overall size and detail of a project: the materials, furnishings, paint, fabrics, and architectural embellishments utilized are all examples of scope. As stated above, earnings for interior designers and decorators may include a margin charged to the client as a percentage of the total cost of certain furniture and fixtures used in the scope of work. Hence, as scope increases, so do earnings.
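As a rough illustration of how scope drives earnings under the fee structure described above (a time-based fee plus a margin charged as a percentage of furnishings and fixtures), consider the sketch below in Python. All rates and amounts are invented for the example.

    # Hypothetical fee model: hourly design fee plus a margin on the cost
    # of furniture and fixtures supplied within the scope of work.
    def project_earnings(hours, hourly_rate, furnishings_cost, margin_rate):
        return hours * hourly_rate + furnishings_cost * margin_rate

    # A larger scope (more hours, more furnishings) raises both terms.
    small_job = project_earnings(hours=40, hourly_rate=75, furnishings_cost=10_000, margin_rate=0.20)
    large_job = project_earnings(hours=120, hourly_rate=75, furnishings_cost=60_000, margin_rate=0.20)
    print(small_job, large_job)   # 5000.0 21000.0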

A theme is a consistent idea used throughout a room to create a feeling of completeness and cohesion. Themes often follow period styles; examples include Louis XV, Victorian, Minimalist, Georgian, Gothic, Mughal, and Art Deco. Interior decoration themes have since evolved to include themes not necessarily consistent with a specific period style, allowing the mixing of pieces from different periods. Each element should contribute to form or function or both, maintain a consistent standard of quality, and combine with the others to create the desired design. For the last 10 years, decorators, designers, architects, and homeowners have been re-discovering the unique furniture developed in the post-war years of the 1950s and 1960s from new materials originally developed for military applications. Some of the trendsetters include Ray Eames and Herman Miller.

Interior decoration has become a popular television subject. In the United Kingdom (UK), popular interior decorating programs include Changing Rooms (BBC) and Selling Houses (Channel 4). Famous interior designers whose work is featured in these programs include Linda Barker and Laurence Llewelyn-Bowen. In the United States, the TLC network airs a popular program called Trading Spaces, a show with a format similar to the UK program Changing Rooms. In addition, both Home & Garden Television (HGTV) and the Discovery Home networks televise many programs about interior design and decorating, featuring the work of a variety of interior designers, decorators, and home improvement experts across a myriad of projects. Fictional interior decorators include the Sugarbaker sisters on Designing Women and Grace Adler on Will & Grace. Another show, Clean House, remakes cluttered homes into themed rooms chosen by the clients. Other shows include Design on a Dime and Designed to Sell.

Many of the most famous designers and decorators during the 20th century had no formal training. Sister Parish, Mark Hampton, Robert Denning and Vincent Fourcade, Stephen Chase, Mario Buatta, John Saladino, Kelly Wearstler, Nina Petronzio, Barbara Barry, Jeanine Naviaux, and many others were trend-setting innovators in the worlds of design and decoration.




Source: Wikipedia