Posted by pompos 03/01/2009 @ 18:04

Tags : components, hardware, technology

News headlines
EDAC Technologies To Buy Aero Engine Component Maker - Trading Markets (press release)
has signed a contract to purchase the manufacturing unit assets of MTU Aero Engines North America, a maker of components for the aerospace industry. Newington, Connecticut-based AENA primarily manufactures rotating components, such as disks, rings,...
Seeking Additional Growth with Innovative Product Designs, Lippert ... - PR Newswire (press release)
This innovative design not only contributes to the safety of coupling a trailer to a tow vehicle, but it has an alignment system integrated into the coupler," said Brian Donat , Director of Marketing for Lippert Components. Donat said the QuickBite...
Indie Ranch Media Subsidiary NetMix Broadcasting Announces ... - SYS-CON Media (press release)
MALIBU, CA -- (Marketwire) -- 05/18/09 -- Indie Ranch Media, Inc. (the "Company") (PINKSHEETS: INDR) announced that their wholly owned subsidiary NetMix Broadcasting Network has completed the technical requirements for broadcasting their top stations...
Builder confidence hits 8-month high -
Scores for each component are then used to calculate a seasonally adjusted index where any number over 50 indicates that more builders view conditions as good. Two out of three components of the index rose in May. The index component gauging current...
Automotive, industrial drag down German component market - EE Times Deutschland
While revenues in the German industry in 2008 were caused predominantly by soft demand in the semiconductor sector, the decline in 2009 will affect more or less all segments relevant for the electronic component market, says ZVEI....
BOM Products: Components, Circuits, and Associated Services - SMT Magazine
Following are the bill of materials (BOM) products recently released by component manufacturers, PCB suppliers, connector companies, and other industry suppliers, including Wall Industries, Vishay, Hirose, Premo, Aimtec, and Screaming Circuits....
Certusoft Configurator to Integrate 3 D-Cubed Components - Ten Links
D-Cubed 3D DCM is the industry's preferred component for implementing assembly modeling functionality in a wide range of design applications. The D-Cubed Collision Detection Manager (CDM) will be integrated by Certusoft into its Configurator...
Notebook component shortage likely to get worse in 2H09, says Inventec - DigiTimes
The supply of key notebook components may continue to worsen in the second half of 2009 leading to increasing costs, according to Alexander Hsu, president of Inventec's finance division. Components currently facing a shortage include DRAM modules,...
Microsoft tries to patent a working 'Magic Wand' for Xbox 360 - BetaNews
The application says, "The architecture can include a variety of I/O components such as keys/keypad, navigation buttons, lights, switches, displays, speakers, microphones, transmitters/receives, or substantially any other suitable component found in or...
TransDigm to Host Analyst Day at its Champion Aerospace Facility - PR Newswire (press release)
TransDigm Group, through its wholly-owned subsidiaries, is a leading global designer, producer and supplier of highly engineered aircraft components for use on nearly all commercial and military aircraft in service today. Major product offerings...

Components (album)

Components cover

Components is an album by jazz vibraphonist Bobby Hutcherson, released on the Blue Note label in 1965. The first side of the LP features compositions by Hutcherson, in a hard bop style, whilst the second side features Joe Chambers' compositions, more in the avant-garde style.


Principal component analysis

Principal component analysis (PCA) involves a mathematical procedure that transforms a number of possibly correlated variables into a smaller number of uncorrelated variables called principal components. The first principal component accounts for as much of the variability in the data as possible, and each succeeding component accounts for as much of the remaining variability as possible. Depending on the field of application, it is also named the discrete Karhunen-Loève transform (KLT), the Hotelling transform or proper orthogonal decomposition (POD).

PCA was invented in 1901 by Karl Pearson. Now it is mostly used as a tool in exploratory data analysis and for making predictive models. PCA involves the calculation of the eigenvalue decomposition of a data covariance matrix or singular value decomposition of a data matrix, usually after mean centering the data for each attribute. The results of a PCA are usually discussed in terms of component scores and loadings (Shaw, 2003).
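The procedure described above can be sketched in a few lines of NumPy (an illustrative implementation, not a library API): mean-center each attribute, take the SVD of the data matrix, and read off the component scores and loadings.

```python
import numpy as np

def pca(X):
    """PCA of an (n_samples, n_features) array: mean-center each
    attribute, then take the SVD of the centered data matrix."""
    Xc = X - X.mean(axis=0)                        # mean centering
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    scores = U * s                                 # component scores
    loadings = Vt                                  # rows = principal directions
    explained_var = s ** 2 / (len(X) - 1)          # eigenvalues of covariance
    return scores, loadings, explained_var

# Two strongly correlated variables: one principal component dominates.
rng = np.random.default_rng(0)
t = rng.normal(size=(500, 1))
X = np.hstack([t, 2 * t + 0.1 * rng.normal(size=(500, 1))])
scores, loadings, var = pca(X)
assert var[0] > var[1]             # components ordered by explained variance
```

The SVD route is numerically preferable to explicitly forming and eigendecomposing the covariance matrix, though both yield the same components.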

PCA is the simplest of the true eigenvector-based multivariate analyses. Often, its operation can be thought of as revealing the internal structure of the data in a way which best explains the variance in the data. If a multivariate dataset is visualised as a set of coordinates in a high-dimensional data space (1 axis per variable), PCA supplies the user with a lower-dimensional picture, a "shadow" of this object when viewed from its (in some sense) most informative viewpoint.

PCA is closely related to factor analysis; indeed, some statistical packages deliberately conflate the two techniques. True factor analysis makes different assumptions about the underlying structure and solves eigenvectors of a slightly different matrix.

PCA is mathematically defined as an orthogonal linear transformation that transforms the data to a new coordinate system such that the greatest variance by any projection of the data comes to lie on the first coordinate (called the first principal component), the second greatest variance on the second coordinate, and so on. PCA is theoretically the optimal transform for given data in least-squares terms.

PCA can be used for dimensionality reduction in a data set by retaining those characteristics of the data set that contribute most to its variance, by keeping lower-order principal components and ignoring higher-order ones. Such low-order components often contain the "most important" aspects of the data. However, depending on the application this may not always be the case.
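This kind of reduction can be sketched as follows (illustrative NumPy with made-up data): when a few latent factors dominate the variance, keeping only the top-L components reconstructs the data almost exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
# 100 observations of 5 variables driven by 2 latent factors plus noise
latent = rng.normal(size=(100, 2))
mixing = rng.normal(size=(2, 5))
X = latent @ mixing + 0.01 * rng.normal(size=(100, 5))

Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

L = 2                                      # keep the two lowest-order components
X_reduced = Xc @ Vt[:L].T                  # each observation now has L coordinates
X_approx = X_reduced @ Vt[:L] + X.mean(axis=0)

# Reconstruction from 2 of 5 dimensions is nearly lossless here
rel_err = np.linalg.norm(X - X_approx) / np.linalg.norm(X)
assert rel_err < 0.05
```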

PCA has the distinction of being the optimal linear transformation for keeping the subspace that has largest variance. This advantage, however, comes at the price of greater computational requirement if compared, for example, to the discrete cosine transform.

The eigenvectors with the largest eigenvalues correspond to the dimensions that have the strongest correlation in the data set (see Rayleigh quotient).

PCA is equivalent to empirical orthogonal functions (EOF).

An autoencoder neural network with a linear hidden layer is similar to PCA. Upon convergence, the weight vectors of the K neurons in the hidden layer will form a basis for the space spanned by the first K principal components. Unlike PCA, this technique will not necessarily produce orthogonal vectors.

PCA is a popular technique in pattern recognition, but it is not optimized for class separability. An alternative is linear discriminant analysis, which does take class separability into account. PCA optimally minimizes reconstruction error under the L2 norm.

PCA is theoretically the optimal linear scheme, in terms of least mean square error, for compressing a set of high dimensional vectors into a set of lower dimensional vectors and then reconstructing the original set. It is a non-parametric analysis, and the answer is unique and independent of any hypothesis about the data's probability distribution. However, the latter two properties are regarded as a weakness as well as a strength: being non-parametric, PCA can incorporate no prior knowledge, and PCA compression often incurs a loss of information.

The standard derivation assumes the observed data set to be a linear combination of a certain basis. Non-linear methods such as kernel PCA have been developed that do not assume linearity.
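A minimal kernel PCA sketch in NumPy (illustrative, assuming an RBF kernel and hand-rolled rather than relying on any library implementation): the eigendecomposition is performed on the centered kernel (Gram) matrix instead of the covariance matrix, which lets the method pick up non-linear structure such as the radial layout of two concentric rings.

```python
import numpy as np

def rbf_kernel_pca(X, gamma, n_components):
    """Kernel PCA with an RBF kernel: eigendecompose the centered
    kernel matrix rather than the covariance matrix."""
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T    # pairwise squared distances
    K = np.exp(-gamma * d2)
    n = len(X)
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one      # center in feature space
    vals, vecs = np.linalg.eigh(Kc)                 # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]     # take the largest ones
    return vecs[:, idx] * np.sqrt(np.abs(vals[idx]))

# Two concentric rings: linearly inseparable, but for a suitable gamma
# the leading kernel components tend to reflect the radial structure.
rng = np.random.default_rng(2)
theta = rng.uniform(0, 2 * np.pi, 200)
r = np.repeat([1.0, 3.0], 100)
X = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
Z = rbf_kernel_pca(X, gamma=1.0, n_components=2)
```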

PCA uses the eigenvectors of the covariance matrix and it only finds the independent axes of the data under the Gaussian assumption. For non-Gaussian or multi-modal Gaussian data, PCA simply de-correlates the axes. When PCA is used for clustering, its main limitation is that it does not account for class separability since it makes no use of the class label of the feature vector. There is no guarantee that the directions of maximum variance will contain good features for discrimination.

PCA simply performs a coordinate rotation that aligns the transformed axes with the directions of maximum variance. It is only when we believe that the observed data has a high signal-to-noise ratio that the principal components with larger variance correspond to interesting dynamics and lower ones correspond to noise.

Essentially, PCA involves only rotation and scaling. The above assumptions are made in order to simplify the algebraic computation on the data set. Some other methods have been developed without one or more of these assumptions; these are described below.

Suppose you have data comprising a set of observations of M variables, and you want to reduce the data so that each observation can be described with only L variables, L < M. Suppose further, that the data are arranged as a set of N data vectors with each representing a single grouped observation of the M variables.

Notice that each P_i is an eigenvector of the covariance matrix of X. Therefore, by finding the eigenvectors of the covariance matrix of X, we find a projection matrix P that satisfies the original constraints.

It has been shown recently (2007) that the relaxed solution of K-means clustering, specified by the cluster indicators, is given by the PCA principal components, and that the PCA subspace spanned by the principal directions is identical to the cluster centroid subspace specified by the between-class scatter matrix. Thus PCA automatically projects to the subspace where the global solution of K-means clustering lies, and thereby facilitates K-means clustering in finding near-optimal solutions.

Correspondence analysis (CA) was developed by Jean-Paul Benzécri and is conceptually similar to PCA, but scales the data (which must be positive) so that rows and columns are treated equivalently. It is traditionally applied to contingency tables. CA decomposes the Chi-square statistic associated with this table into orthogonal factors. Because CA is a descriptive technique, it can be applied to tables whether or not the Chi-square statistic is appropriate. Several variants of CA are available, including Detrended Correspondence Analysis and Canonical Correspondence Analysis.

Most of the modern methods for nonlinear dimensionality reduction find their theoretical and algorithmic roots in PCA or K-means. The original Pearson's idea was to take a straight line (or plane) which will be "the best fit" to a set of data points. Principal curves and manifolds give the natural geometric framework for PCA generalization and extend the geometric interpretation of PCA by explicitly constructing an embedded manifold for data approximation, and by encoding using standard geometric projection onto the manifold. See principal geodesic analysis.

N-way principal component analysis may be performed with models like PARAFAC and Tucker decomposition.


Component-based software engineering

Software component representations: above the representation used in UML, below the representation commonly used by Microsoft's COM objects. The "lollipops" sticking out from the components are their interfaces. Note the characteristic IUnknown interface of the COM component.

Component-based software engineering (CBSE) (also known as Component-Based Development (CBD) or Software Componentry) is a branch of the software engineering discipline, with emphasis on decomposition of the engineered systems into functional or logical components with well-defined interfaces used for communication across the components.

Components are considered to be at a higher level of abstraction than objects; as such, they do not share state, and they communicate by exchanging messages that carry data.

A simpler definition can be: A component is an object written to a specification. It does not matter what the specification is: COM, Enterprise JavaBeans, etc., as long as the object adheres to the specification. It is only by adhering to the specification that the object becomes a component and gains features such as reusability.
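This "object written to a specification" idea can be sketched as follows (the StorageSpec and DictStorage names are invented for illustration): here the specification is a plain Python abstract base class rather than COM or Enterprise JavaBeans, but the principle is the same, and any object adhering to the specification can be substituted freely.

```python
from abc import ABC, abstractmethod

# Hypothetical specification: any object adhering to it is a component.
class StorageSpec(ABC):
    @abstractmethod
    def put(self, key, value): ...
    @abstractmethod
    def get(self, key): ...

# One interchangeable implementation of the specification.
class DictStorage(StorageSpec):
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data.get(key)

# Client code depends only on the specification, not the implementation;
# this is what makes the component reusable and replaceable.
def cache_greeting(store: StorageSpec):
    store.put("greeting", "hello")
    return store.get("greeting")

assert cache_greeting(DictStorage()) == "hello"
```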

Software components often take the form of objects or collections of objects (from object-oriented programming), in some binary or textual form, adhering to some interface description language (IDL) so that the component may exist autonomously from other components in a computer.

When a component is to be accessed or shared across execution contexts or network links, techniques such as serialization or marshalling are often employed to deliver the component to its destination.

Reusability is an important characteristic of a high quality software component. A software component should be designed and implemented so that it can be reused in many different programs.

In the 1960s, scientific subroutine libraries were built that were reusable in a broad array of engineering and scientific applications. Though these subroutine libraries reused well-defined algorithms in an effective manner, they had a limited domain of application. Today, modern reusable components encapsulate both data structures and the algorithms that are applied to the data structures.

CBSE builds on prior theories of software objects, software architectures, software frameworks and software design patterns, together with the extensive theory of object-oriented programming and object-oriented design. It claims that software components, like hardware components (used, for example, in telecommunications), can ultimately be made interchangeable and reliable.

The idea that software should be componentized, built from prefabricated components, was first published in Douglas McIlroy's address at the NATO conference on software engineering in Garmisch, Germany, 1968 titled Mass Produced Software Components. This conference set out to counter the so-called software crisis. His subsequent inclusion of pipes and filters into the Unix operating system was the first implementation of an infrastructure for this idea.

IBM led the way with its System Object Model (SOM) in the early 1990s. Some claim that Microsoft paved the way for actual deployment of component software with OLE and COM. Today, many successful software component models exist.

The idea in object-oriented programming (OOP) is that software should be written according to a mental model of the actual or imagined objects it represents. OOP and the related disciplines of object-oriented design and object-oriented analysis focus on modeling real-world interactions and attempting to create 'verbs' and 'nouns' which can be used in intuitive ways, ideally by end users as well as by programmers coding for those end users.

Component-based software engineering, by contrast, makes no such assumptions, and instead states that software should be developed by gluing prefabricated components together, much as in the field of electronics or mechanics. Some practitioners even describe modularizing systems into software components as a new programming paradigm.

Some argue that this distinction was made by earlier computer scientists, with Donald Knuth's theory of "literate programming" optimistically assuming there was convergence between intuitive and formal models, and Edsger Dijkstra's theory in the article On the Cruelty of Really Teaching Computing Science, which stated that programming was simply, and only, a branch of mathematics.

In both forms, this notion has led to many academic debates about the pros and cons of the two approaches and possible strategies for uniting the two. Some consider them not really competitors, but only descriptions of the same problem from two different points of view.

A computer running several software components is often called an application server. Using this combination of application servers and software components is usually called distributed computing. The usual real-world application of this is in financial applications or business software.


Component video

Three cables, each with RCA plugs at both ends, are often used to carry analog component video

Component video is a video signal that has been split into two or more components. In popular use, it refers to a type of analog video information that is transmitted or stored as three separate signals. Component video can be contrasted with composite video (NTSC, PAL or SECAM) in which all the video information is combined into a single line-level signal. Like composite, component video cables do not carry audio and are often paired with audio cables.

When used without any other qualifications the term component video generally refers to analog YPbPr component video with sync on luma.

Reproducing a video signal on a display device (for example, a CRT) is a straightforward process complicated by the multitude of signal sources. DVD, VHS, computers and video game consoles all store, process and transmit video signals using different methods, and often each will provide more than one signal option. One way of maintaining signal clarity is by separating the components of a video signal so that they do not interfere with each other. When a signal is separated this way it is called 'component video'. S-Video, RGB and YPbPr signals comprise two or more separate signals: hence, all are 'component video' signals. For most consumer-level applications, analog component video is used. Digital component video is slowly becoming popular in both computer and home-theatre applications. Component video is capable of carrying signals such as 480i, 480p, 576i, 576p, 720p, 1080i and 1080p, and new high definition TVs support the use of component video up to their native resolution.

The various RGB (red, green, blue) analog component video standards (e.g., RGBS, RGBHV, RG&SB) use no compression and impose no real limit on color depth or resolution, but require large bandwidth to carry the signal and contain much redundant data since each channel typically includes the same black and white image. Most modern computers offer this signal via the VGA port. Many televisions, especially in Europe, utilize RGB via the SCART connector. All arcade games, excepting early vector and black and white games, use RGB monitors.

Analog RGB is slowly falling out of favor as computers obtain better clarity using digital (DVI) video and home theater moves towards HDMI. Analog RGB has been largely ignored, despite its quality and suitability, as it cannot easily be made to support digital rights management. RGB was never popular in North America for consumer electronics, although it was used extensively in commercial, professional and high-end installations, as S-Video was considered sufficient for consumer use.

Composite sync is common in the European SCART connection scheme (using pins 17 and 19 or 20). Sometimes a full composite video signal may also serve as the sync signal, though computer monitors are often unable to handle the extra video data. A full composite-sync video signal requires four wires: red, green, blue, and sync. If separate cables are used, the sync cable is usually colored white (or yellow, as is the standard for composite video).

Separate sync is most common with VGA, used worldwide for analog computer monitors. This is sometimes known as RGBHV, as the horizontal and vertical synchronization pulses are sent in separate channels. This mode requires five conductors. If separate cables are used, the sync lines are usually yellow (H) and white (V), or yellow (H) and black (V), or gray (H) and black (V).

Sync on Green (SoG) is the least common, and while some VGA monitors support it, most do not. Sony is a big proponent of SoG, and most of their monitors (and their PlayStation 2 video game console) use it. Like devices that use composite video or S-video, SoG devices require additional circuitry to remove the sync signal from the green line. A monitor that is not equipped to handle SoG will display an image with an extreme green tint, if any image at all, when given a SoG input.

Further types of component analog video signals do not use R, G, and B components but rather a colorless component, termed luma, combined with one or more color-carrying components, termed chroma, that give only color information. This overcomes the problem of data redundancy that plagues RGB signals, since there is only one monochromatic image carried, instead of three. Both the S-Video component video output (two separate signals) and the YPbPr component video output (three separate signals) seen on DVD players are examples of this method.

Converting video into luma and chroma allows for chroma subsampling, a method used by JPEG images and DVD players to reduce the storage requirements for images and video. The YPbPr scheme is usually what is meant when people talk of component video today. Many consumer DVD players, high-definition displays, video projectors and the like, use this form of color coding.
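The split into one luma and two chroma components can be sketched as follows (an illustrative NumPy conversion using the BT.601 luma coefficients associated with SD analog component video; HD systems use different coefficients):

```python
import numpy as np

# BT.601 luma coefficients (assumption: SD analog component video)
KR, KG, KB = 0.299, 0.587, 0.114

def rgb_to_ypbpr(rgb):
    """Split gamma-corrected R'G'B' (values in [0, 1]) into one luma
    component and two color-difference (chroma) components."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = KR * r + KG * g + KB * b           # the single monochrome image
    pb = 0.5 * (b - y) / (1 - KB)          # scaled blue-difference signal
    pr = 0.5 * (r - y) / (1 - KR)          # scaled red-difference signal
    return np.stack([y, pb, pr], axis=-1)

# Pure white carries no chroma: both color-difference signals are zero.
y, pb, pr = rgb_to_ypbpr(np.array([1.0, 1.0, 1.0]))
assert abs(y - 1.0) < 1e-9 and abs(pb) < 1e-9 and abs(pr) < 1e-9
```

Since the chroma components vary more slowly to the eye than luma, they can then be subsampled (for example, stored at half resolution) with little perceived loss, which is exactly the saving chroma subsampling exploits.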

These connections are commonly and mistakenly labeled with terms like "YUV", "Y/R-Y/B-Y" and "Y, B-Y, R-Y". This is inaccurate, since YUV, YPbPr, and Y B-Y R-Y differ in their scale factors.

When used for connecting a video source to a video display where both support 4:3 and 16:9 display formats, the PAL television standard provides for signaling pulses that will automatically switch the display from one format to the other. However, Y'PbPr does not support this operation.

S-Video (S for separated) is sometimes considered a type of component video signal (transferring YUV when used for PAL video and YIQ when used for NTSC video), because the luma (Y) and chroma (UV or IQ) signals are transmitted on separate wires. However, it is also the poorest quality-wise, being far surpassed by the more complex component video schemes (like RGB). S-Video is not being used for high definition standards because the carrier frequency of the color signal modulation would have to be adjusted.

A possible source of confusion is that the word component differs from composite (an older, more widely-known video format) by just a few letters.

Component video connectors are not unique in that the same connectors are used for several different standards; hence, making a component video connection often does not lead to a satisfactory video signal being transferred. The settings on many DVD players and TVs may need to be set to indicate the type of input/output being used, and if set wrong the image may not be properly displayed. Progressive scan, for example, is often not enabled by default, even when component video output is selected.

Modern game systems (such as the PlayStation 3, Xbox 360, and Wii) use the same connector pins for both YPbPr and composite video, with a software or hardware switch to determine which signal is generated. Hence, a common complaint, especially with the PlayStation 3, is that the component video signals are very green, with very dark reds and blues. This is simply because the system menu has not been changed from AV (composite) to RGB (component).


Electrical element

Electrical components.

The concept of electrical elements is used in the analysis of electrical networks. Any electrical network can be modeled by decomposing it into multiple, interconnected electrical elements in a schematic diagram or circuit diagram. Each electrical element affects the voltage in the network or the current through the network in a particular way. By analyzing the way a network is affected by its individual elements, it is possible to estimate how a real network will behave on a macro scale.

There is a distinction between real, physical electrical or electronic components and the ideal electrical elements by which they are represented.

Circuit analysis using electrical elements is useful for understanding many practical electrical networks built from components.

The fourth basic element, the memristor (joining the resistor, the capacitor and the inductor), was theorized by Leon Chua in a 1971 paper, but a physical component demonstrating memristance was not created until thirty-seven years later. It was reported on April 30, 2008, that a working memristor had been developed by a team at HP Labs led by scientist R. Stanley Williams. With the advent of the memristor, each pairing of the four circuit variables can now be related. Memristors can store one bit of non-volatile memory. They may see application in programmable logic, signal processing, neural networks, and control systems, among other fields. Because memristors are time-variant by definition, they are not included in linear time-invariant (LTI) circuit models.

The following are examples of representation of components by way of electrical elements.


Passivity (engineering)

Television signal splitter consisting of a passive hi-pass filter (left) and a passive low-pass filter (right). The antenna is connected to the screw terminals to the left of center.

Passivity is a property of engineering systems, most commonly used in electronic engineering and control systems. A passive component, depending on field, may either refer to a component that consumes (but does not produce) energy, or to a component that is incapable of power gain. A component that is not passive is called an active component. An electronic circuit consisting entirely of passive components is called a passive circuit (and has the same properties as a passive component).

In control systems and circuit network theory, a passive component or circuit is one that consumes energy but does not produce energy, and cannot control the flow of electrons. Under this methodology, voltage and current sources are considered active, while resistors, tunnel diodes, glow tubes, capacitors, and other dissipative and energy-neutral components are considered passive. For memoryless two-terminal elements, this means that the current-voltage characteristic lies in the first and third quadrants. Circuit designers will sometimes refer to this class of components as dissipative, or thermodynamically passive.
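The first-and-third-quadrant criterion can be checked numerically. A small sketch with an ideal resistor (passive: it always absorbs power, so v*i >= 0) and an idealized battery with series resistance (active: it can deliver power, so v*i < 0 for some voltages); the component values are arbitrary.

```python
# Memoryless two-terminal elements: passive iff v * i(v) >= 0 everywhere.
def resistor_current(v, R=100.0):
    """Ideal resistor: i = v / R."""
    return v / R

def source_current(v, V0=5.0, R=100.0):
    """Idealized 5 V battery with series resistance R."""
    return (v - V0) / R

vs = [x / 10.0 for x in range(-50, 51)]       # sweep -5 V .. +5 V
assert all(v * resistor_current(v) >= 0 for v in vs)   # stays in quadrants I/III
assert any(v * source_current(v) < 0 for v in vs)      # delivers power: active
```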

In the formal definition, the available energy E_A of a system is given by a supremum taken over all T > 0 and all admissible input-output pairs. A system is considered passive if E_A is finite for all initial states x. Otherwise, the system is considered active.

In circuit design, informally, passive components refer to ones that are not capable of power gain. Under this definition, passive components include capacitors, inductors, resistors, transformers, voltage sources, and current sources. They exclude devices like transistors, relays, glow tubes, tunnel diodes, and similar devices. Formally, for a memoryless two-terminal element, this means that the current-voltage characteristic is monotonically increasing. For this reason, control systems and circuit network theorists refer to these devices as locally passive, incrementally passive, increasing, monotone increasing, or monotonic. It is not clear how this definition would be formalized to multiport devices with memory -- as a practical matter, circuit designers use this term informally, so it may not be necessary to formalize it.

Passivity, in most cases, can be used to demonstrate that passive circuits will be stable under specific criteria. Note that this only works if only one of the above definitions of passivity is used -- if components from the two are mixed, the systems will, in general, not be stable under any criteria. In addition, passive circuits will not necessarily be stable under all stability criteria. For instance, a resonant series LC circuit will have unbounded voltage output for a bounded voltage input, but will be stable in the sense of Lyapunov, and given bounded energy input will have bounded energy output.

Passivity is frequently used in control systems to design stable control systems or to show stability in control systems. Passivity is also used in some areas of circuit design, especially filter design.

A passive filter is a kind of electronic filter that is made only from passive elements -- in contrast to an active filter, it does not require an external power source (beyond the signal). Since most filters are linear, in most cases, passive filters are composed of just the four basic linear elements -- resistors, capacitors, inductors, and transformers. More complex passive filters may involve nonlinear elements, or more complex linear elements, such as transmission lines.
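As a worked example of the simplest such filter, a first-order RC low-pass built from two passive elements has cutoff frequency f_c = 1/(2*pi*R*C) and is 3 dB down at that frequency (the component values below are arbitrary, chosen to put f_c near 1 kHz):

```python
import math

# First-order passive RC low-pass: |H(f)| = 1 / sqrt(1 + (f / fc)^2),
# with cutoff frequency fc = 1 / (2 * pi * R * C).
R = 1_000.0      # ohms   (illustrative value)
C = 159e-9       # farads (illustrative value)

fc = 1 / (2 * math.pi * R * C)               # ~1 kHz for these values

def gain(f):
    """Magnitude response of the RC low-pass at frequency f (Hz)."""
    return 1 / math.sqrt(1 + (f / fc) ** 2)

assert abs(gain(fc) - 1 / math.sqrt(2)) < 1e-12   # -3 dB at the cutoff
assert gain(10 * fc) < gain(fc) < gain(0.1 * fc)  # attenuation grows with f
```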

They are commonly used in speaker crossover design (due to the moderately large voltages and currents, and the lack of easy access to power), filters in power distribution networks (due to the large voltages and currents), power supply bypassing (due to low cost, and in some cases, power requirements), as well as a variety of discrete and home brew circuits (for low-cost and simplicity). Passive filters are less common in integrated circuit design, where active devices are comparatively inexpensive compared to resistors and capacitors, and inductors are prohibitively expensive.


Locally connected space

In this topological space, V is a neighbourhood of p and it contains a connected neighbourhood (the dark green disk) that contains p.

In topology and other branches of mathematics, a topological space X is locally connected if every point admits a neighbourhood basis consisting entirely of open, connected sets.

Throughout the history of topology, connectedness and compactness have been two of the most widely studied topological properties. Indeed, the study of these properties even among subsets of Euclidean space, and the recognition of their independence from the particular form of the Euclidean metric, played a large role in clarifying the notion of a topological property and thus a topological space. However, whereas the structure of compact subsets of Euclidean space was understood quite early on via the Heine-Borel Theorem, connected subsets of R^n (for n > 1) proved to be much more complicated. Indeed, while any compact Hausdorff space is locally compact, a connected space – and even a connected subset of the Euclidean plane – need not be locally connected (see below).

This led to a rich vein of research in the first half of the twentieth century, in which topologists studied the implications between increasingly subtle and complex variations on the notion of a locally connected space. As an example, we will consider here the notion of weak local connectedness at a point and its relation to local connectedness.

In the latter part of the twentieth century, research trends shifted to more intense study of spaces like manifolds which are locally well understood (being locally homeomorphic to Euclidean space) but have complicated global behavior. From this modern perspective, the stronger property of local path connectedness turns out to be more important: for instance, in order for a space to admit a universal cover it must be connected and locally path connected. We will discuss local path connectedness as well.

A space is locally connected if and only if for every open set U, the connected components of U (in the subspace topology) are open. It follows, for instance, that a continuous function from a locally connected space to a totally disconnected space must be locally constant. In fact the openness of components is so natural that one must be sure to keep in mind that it is not true in general: for instance Cantor space is totally disconnected but not discrete.

Let X be a topological space, and let x be a point of X.

We say that X is locally connected at x if for every open set V containing x there exists a connected, open set U with x ∈ U ⊆ V. The space X is said to be locally connected if it is locally connected at x for all x in X.

By contrast, we say that X is weakly locally connected at x (or connected im kleinen at x) if for every open set V containing x there exists a connected subset N of V such that x lies in the interior of N. An equivalent definition is: each open set V containing x contains an open neighborhood U of x such that any two points in U lie in some connected subset of V. The space X is said to be weakly locally connected if it is weakly locally connected at x for all x in X.
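The weaker condition can likewise be written symbolically (a restatement of the definition above, with int denoting interior):

```latex
% X is weakly locally connected (connected im kleinen) at x:
% some connected N inside V has x in its interior; N itself
% need not be open.
\[
\forall\, V \text{ open with } x \in V
\;\;\exists\, N \subseteq V \text{ connected}: \quad x \in \operatorname{int}(N) .
\]
```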

In other words, the only difference between the two definitions is that for local connectedness at x we require a neighborhood base of open connected sets, whereas for weak local connectedness at x we require only a neighborhood base of connected sets, which need not be open.

Evidently a space which is locally connected at x is weakly locally connected at x. The converse does not hold (a counterexample, the broom space, is given below). On the other hand, it is equally clear that a locally connected space is weakly locally connected, and here it turns out that the converse does hold: a space which is weakly locally connected at all of its points is necessarily locally connected at all of its points. A proof is given below.
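The relationships just described can be summarized schematically (informal notation; this restates the text above):

```latex
% Pointwise: a one-way implication (the converse fails, e.g. at
% the special point of the broom space).
\[
\text{locally connected at } x \;\Longrightarrow\; \text{weakly locally connected at } x
\]
% Globally: the two notions coincide (a proof is given below).
\[
\text{weakly locally connected at every point} \;\Longleftrightarrow\; \text{locally connected}
\]
```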

We say that X is locally path connected at x if for every open set V containing x there exists a path connected, open set U with x ∈ U ⊆ V. The space X is said to be locally path connected if it is locally path connected at x for all x in X.

Since path connected spaces are connected, locally path connected spaces are locally connected. This time the converse does not hold (see example 6 in the next section).

1. For any positive integer n, the Euclidean space Rⁿ is connected and locally connected.

2. A subspace of the real line consisting of two disjoint closed intervals, such as [0, 1] ∪ [2, 3], is locally connected but not connected.

3. The topologist's sine curve is a subspace of the Euclidean plane which is connected, but not locally connected.

4. The space Q of rational numbers, endowed with the standard Euclidean topology, is neither connected nor locally connected.

5. The comb space is path connected but not locally path connected.

6. Let X be a countably infinite set endowed with the cofinite topology. Then X is locally connected (indeed, hyperconnected) but not locally path connected.

Further examples are given later on in the article.

1. A space is locally connected if and only if it admits a base of connected subsets.

2. The disjoint union of a family {Xi} of spaces is locally connected if and only if each Xi is locally connected. In particular, since a single point is certainly locally connected, it follows that any discrete space is locally connected. On the other hand, a discrete space is totally disconnected, so is connected only if it has at most one point.

3. Conversely, a totally disconnected space is locally connected if and only if it is discrete. This can be used to explain the aforementioned fact that the rational numbers are not locally connected.

Lemma: Let X be a space, and {Yi} a family of subsets of X. Suppose that the common intersection ⋂i Yi is nonempty. Then, if each Yi is connected (respectively, path connected), the union ⋃i Yi is connected (respectively, path connected).
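For the connected case, a standard proof sketch (using the characterization of connectedness via clopen sets) runs as follows:

```latex
% Sketch for the connected case. Put Y = \bigcup_i Y_i and fix a
% point p in \bigcap_i Y_i.
\[
\text{Let } A \subseteq Y \text{ be clopen in } Y \text{ with } p \in A .
\]
% Each A \cap Y_i is clopen in Y_i and nonempty (it contains p),
% so connectedness of Y_i forces A \cap Y_i = Y_i. Taking the
% union over i gives:
\[
A \cap Y_i = Y_i \ \text{for all } i
\quad\Longrightarrow\quad A = Y .
\]
% Hence the only clopen subset of Y containing p is Y itself, so
% Y admits no separation. The path connected case follows by
% concatenating paths through the common point p.
```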

Define two relations on X: write x ≡c y if there is a connected subset of X containing both x and y, and x ≡pc y if there is a path connected subset of X containing both x and y. Evidently both relations are reflexive and symmetric. Moreover, if x and y are contained in a connected (respectively, path connected) subset A and y and z are contained in a connected (respectively, path connected) subset B, then the Lemma implies that A ∪ B is a connected (respectively, path connected) subset containing x, y and z. Thus each relation is an equivalence relation, and defines a partition of X into equivalence classes. We consider these two partitions in turn.

For x in X, the set Cx of all points y such that some connected subset of X contains both x and y is called the connected component of x. The Lemma implies that Cx is the unique maximal connected subset of X containing x. Since the closure of Cx is also a connected subset containing x, it follows that Cx is closed.

If X has only finitely many connected components, then each component is the complement of a finite union of closed sets and therefore open. In general, the connected components need not be open, since, e.g., there exist totally disconnected spaces (i.e., Cx = {x} for all points x) which are not discrete, like Cantor space. However, the connected components of a locally connected space are also open, and thus are clopen sets. It follows that a locally connected space X is a topological disjoint union of its distinct connected components. Conversely, if for every open subset U of X, the connected components of U are open, then X admits a base of connected sets and is therefore locally connected.
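In symbols, the disjoint-union decomposition described above reads (notation as in the surrounding text):

```latex
% A locally connected space is the topological disjoint union of
% its connected components, each of which is clopen:
\[
X \;=\; \coprod_{i \in I} C_i ,
\qquad C_i \ \text{open and closed in } X .
\]
```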

Similarly, for x in X, the set PCx of all points y such that some path connected subset of X contains both x and y is called the path component of x. As above, PCx is also the union of all path connected subsets of X which contain x, so by the Lemma it is itself path connected. Because path connected sets are connected, we have PCx ⊆ Cx for all x in X.

However, the closure of a path connected set need not be path connected: for instance, the topologist's sine curve C is the closure of the open subset U consisting of all points (x, sin(1/x)) with x > 0, and U, being homeomorphic to an interval on the real line, is certainly path connected. Moreover, the path components of the topologist's sine curve C are U, which is open but not closed, and C \ U = {0} × [−1, 1], which is closed but not open.
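Concretely, with the standard parametrization of the topologist's sine curve, the two path components can be displayed as:

```latex
% The open, path connected piece and its closure:
\[
U = \{\, (x, \sin(1/x)) : x > 0 \,\}, \qquad
C = \overline{U} = U \,\cup\, \bigl(\{0\} \times [-1,1]\bigr).
\]
```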

A space is locally path connected if and only if, for all open subsets U, the path components of U are open. Therefore the path components of a locally path connected space give a partition of X into pairwise disjoint open sets. It follows that an open connected subspace of a locally path connected space is necessarily path connected. Moreover, if a space is locally path connected, then it is also locally connected, so for all x in X the component Cx is open and connected; being an open connected subspace of a locally path connected space, it is path connected, so Cx = PCx. That is, for a locally path connected space the components and path components coincide.

1. The set I × I (where I = [0, 1]) in the dictionary order topology has exactly one component (because it is connected) but has uncountably many path components. Indeed, any set of the form {a} × I is a path component for each a belonging to I.

2. Let f be a continuous map from R to Rℓ (R in the lower limit topology). Since R is connected, and the image of a connected space under a continuous map must be connected, the image of R under f must be connected, and is therefore contained in a component of Rℓ. But Rℓ is totally disconnected – its components are the one-point sets – so the nonempty connected image must be a single point. Hence the only continuous maps from R to Rℓ are the constant maps. In fact, any continuous map from a connected space to a totally disconnected space must be constant.

Let X be a topological space. We define a third relation on X: two points x and y are related if there is no separation of X into open sets A and B such that x is an element of A and y is an element of B. This is an equivalence relation on X, and the equivalence class QCx containing x is called the quasicomponent of x.

QCx can also be characterized as the intersection of all clopen subsets of X which contain x. Accordingly QCx is closed; in general it need not be open.
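Equivalently, in symbols (a restatement of the characterization just given):

```latex
% The quasicomponent of x is the intersection of all clopen sets
% containing x; as an intersection of closed sets it is closed.
\[
QC_x \;=\; \bigcap \,\{\, A \subseteq X \;:\; A \ \text{clopen in } X,\ x \in A \,\}.
\]
```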

If X is locally connected, then Cx = QCx for all x in X; if X is moreover locally path connected, then PCx = Cx = QCx.

1. An example of a space whose quasicomponents are not equal to its components can be built from a countably infinite set X with the discrete topology, together with two additional points a and b, topologized so that any neighbourhood of a either contains b or contains all but finitely many points of X, and any neighbourhood of b either contains a or contains all but finitely many points of X. Then the point a lies in the same quasicomponent as b, but not in the same component as b.

2. The Arens-Fort space is not locally connected, but nevertheless the components and the quasi-components coincide: indeed QCx = Cx = {x} for all points x.

Let X be a weakly locally connected space. Then X is locally connected.

It is sufficient to show that the components of open sets are open. Let U be open in X and let C be a component of U. Let x be an element of C. Then x is an element of U, so, since X is weakly locally connected at x, there is a connected subspace A of X contained in U and containing a neighbourhood V of x. Since A is connected and A contains x, A must be a subset of C (the component of U containing x). Therefore, the neighbourhood V of x is a subset of C. Since x was arbitrary, every point of C has a neighbourhood contained in C, so C is open in X. Therefore, X is locally connected.

A certain infinite union of decreasing broom spaces is an example of a space which is weakly locally connected at a particular point, but not locally connected at that point.


Source: Wikipedia