Electronic commerce, commonly known as e-commerce, eCommerce or e-comm, refers to the buying and selling of products or services over electronic systems such as the Internet and other computer networks.
However, the term may refer to more than just buying and selling products online. It also includes the entire online process of developing, marketing, selling, delivering, servicing and paying for products and services. The amount of trade conducted electronically has grown extraordinarily with widespread Internet usage.
Commerce conducted in this way spurs and draws on innovations in electronic funds transfer (EFT), supply chain management, Internet marketing, online transaction processing, electronic data interchange (EDI), inventory management systems, and automated data collection systems.
Modern electronic commerce typically uses the World Wide Web at least at one point in the transaction's life-cycle, although it may encompass a wider range of technologies such as e-mail, mobile devices and telephones as well.
A large percentage of electronic commerce is conducted entirely in electronic form for virtual items such as access to premium content on a website, but most electronic commerce involves the transportation of physical items in some way. Online retailers are sometimes known as e-tailers and online retail is sometimes known as e-tail. Almost all big retailers now have an electronic presence on the World Wide Web.
Electronic commerce that takes place between businesses is referred to as business-to-business or B2B. B2B can be open to all interested parties (e.g. commodity exchange) or limited to specific, pre-qualified participants (private electronic market).
Electronic commerce that takes place between businesses and consumers, on the other hand, is referred to as business-to-consumer or B2C. This is the type of electronic commerce conducted by companies such as Amazon.com.
Online shopping is a form of electronic commerce in which the buyer connects directly to the seller's system, usually via the Internet.
There is no intermediary service involved; the sale or purchase transaction is completed electronically and interactively in real time, as with new-book purchases on Amazon.com. In some cases, however, an intermediary may be present, as with transactions on eBay.com.
Electronic commerce is generally considered to be the sales aspect of e-business. It also consists of the exchange of data to facilitate the financing and payment aspects of business transactions.
Originally, electronic commerce was identified as the facilitation of commercial transactions electronically, using technology such as Electronic Data Interchange (EDI) and Electronic Funds Transfer (EFT). These were both introduced in the late 1970s, allowing businesses to send commercial documents like purchase orders or invoices electronically.
The growth and acceptance of credit cards, automated teller machines (ATM) and telephone banking in the 1980s were also forms of electronic commerce. Another form of e-commerce was the airline reservation system typified by Sabre in the USA and Travicom in the UK.
From the 1990s onwards, electronic commerce would additionally include enterprise resource planning systems (ERP), data mining and data warehousing.
In 1990, Tim Berners-Lee created the WorldWideWeb browser and transformed an academic telecommunication network into a worldwide, everyday communication system: the Internet and the World Wide Web. Commercial enterprise on the Internet was strictly prohibited by the NSF until 1995. Although the Internet became popular worldwide around 1994 with the adoption of the Mosaic web browser, it took about five years to introduce security protocols and DSL, which allowed continual connection to the Internet.
By the end of 2000, many European and American companies offered their services through the World Wide Web. Since then, people have come to associate the word "e-commerce" with the ability to purchase various goods through the Internet using secure protocols and electronic payment services.
Some common applications related to electronic commerce are the following:
- Document automation in supply chain and logistics
- Domestic and international payment systems
- Enterprise content management
- Group buying
- Automated online assistants
- Instant messaging
- Online shopping and order tracking
- Online banking
- Online office suites
- Shopping cart software
- Electronic tickets
In the United States, some electronic commerce activities are regulated by the Federal Trade Commission (FTC).
These activities include the use of commercial e-mails, online advertising and consumer privacy. The CAN-SPAM Act of 2003 establishes national standards for direct marketing over e-mail.
The Federal Trade Commission Act regulates all forms of advertising, including online advertising, and states that advertising must be truthful and non-deceptive.
The Ryan Haight Online Pharmacy Consumer Protection Act of 2008, signed into law that year, amends the Controlled Substances Act to address online pharmacies.
Contemporary electronic commerce involves everything from ordering "digital" content for immediate online consumption, to ordering conventional goods and services, to "meta" services to facilitate other types of electronic commerce.
On the institutional level, big corporations and financial institutions use the Internet to exchange financial data to facilitate domestic and international business. Data integrity and security are pressing issues for electronic commerce.
Business models across the world continue to change drastically with the advent of e-commerce, and this change is not restricted to the United States; other countries are also contributing to its growth.
For example, the United Kingdom has the biggest e-commerce market in the world measured by amount spent per capita, higher even than the United States. The Internet economy in the UK is expected to grow by 10% between 2010 and 2015, which has changed the dynamics of the advertising industry.
Among emerging economies, China's e-commerce presence continues to expand. With 384 million Internet users, China's online shopping sales rose to $36.6 billion in 2009; one reason behind this huge growth has been improved trust among shoppers.
Chinese retailers have been able to help consumers feel more comfortable shopping online. E-commerce has become an important tool for businesses worldwide, not only to sell to customers but also to engage them.
Impact on markets and retailers
Economists have theorized that e-commerce ought to lead to intensified price competition, as it increases consumers' ability to gather information about products and prices. Research by four economists at the University of Chicago has found that the growth of online shopping has also affected industry structure in two areas that have seen significant growth in e-commerce, bookshops and travel agencies.
Generally, larger firms have grown at the expense of smaller ones, as they are able to use economies of scale and offer lower prices. The lone exception to this pattern has been the very smallest category of bookseller, shops with between one and four employees, which appear to have withstood the trend.
E-commerce has grown in importance as companies have adopted pure-click and brick-and-click channel systems.
- Pure-click companies are those that have launched a website without any previous existence as a firm. Such companies must set up and operate their e-commerce websites carefully; customer service is of paramount importance.
- Brick-and-click companies are existing companies that have added an online site for e-commerce. Initially, brick-and-click companies were skeptical about adding an online e-commerce channel, fearing that selling their products online might produce channel conflict with their off-line retailers, agents, or their own stores. However, they eventually added the Internet to their distribution-channel portfolios after seeing how much business their online competitors were generating.
An affiliate is a commercial entity with a relationship with a peer or a larger entity.
A corporation may be referred to as an affiliate of another when it is related to it but not strictly controlled by it, as with a subsidiary relationship, or when it is desired to avoid the appearance of control. This is sometimes seen with companies that need to avoid restrictive laws (or negative public opinion) on foreign ownership.
For the concept as exercised in the North American broadcasting industry see network affiliate.
Affiliate marketing typically refers to an electronic commerce version of the traditional agent/referral fee sales channel concept. An e-commerce affiliate is a website which links back to an e-commerce site with the goal of making a commission for referred sales.
However, as e-commerce continues to evolve, e-commerce affiliates are no longer restricted to website owners. Bloggers and members of different online community forums can be affiliates as well. Many emerging affiliate programs are now accepting bloggers and individuals, not necessarily webmasters, to be affiliates.
Affiliates can also be referred to as publishers. Affiliate marketing is not necessarily conducted only by third parties; sometimes the marketer is the e-commerce website that actually sells the products and services. The advantage of this method is that it cuts out the middleman, but it does require affiliates to have a high degree of trust in the software and the people behind the e-commerce website in question.
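In practice, the "link back" that earns a commission carries an identifier for the referring affiliate, so the merchant can credit the right publisher when a sale occurs. A minimal sketch of how such a link might be built and read back; the parameter name `aff_id` and the domain are invented for illustration:

```python
from urllib.parse import urlencode, urlparse, parse_qs

def build_affiliate_link(product_url: str, affiliate_id: str) -> str:
    """Append a tracking parameter so the merchant can credit the referrer."""
    separator = "&" if urlparse(product_url).query else "?"
    return product_url + separator + urlencode({"aff_id": affiliate_id})

link = build_affiliate_link("https://shop.example.com/item/42", "blog-123")

# On arrival, the merchant's site reads the identifier back out of the URL:
incoming = parse_qs(urlparse(link).query)
# incoming["aff_id"] == ["blog-123"]
```

Real programs typically pair the identifier with a tracking cookie so the referral survives across browsing sessions.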
In electronic commerce, affiliates earn money in three ways:
- per click
- per sale, and
- per lead.
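The three models above differ only in which event triggers a payout. A small sketch, using whole cents to avoid rounding issues; the rates are illustrative, not industry figures:

```python
def affiliate_earnings(clicks: int, sales: int, leads: int,
                       cents_per_click: int = 5,
                       cents_per_sale: int = 400,
                       cents_per_lead: int = 150) -> dict:
    """Earnings (in cents) under each of the three common payment models."""
    return {
        "per_click": clicks * cents_per_click,
        "per_sale": sales * cents_per_sale,
        "per_lead": leads * cents_per_lead,
    }

earnings = affiliate_earnings(clicks=1000, sales=12, leads=40)
# {'per_click': 5000, 'per_sale': 4800, 'per_lead': 6000}
# i.e., $50 from clicks, $48 from sales, $60 from leads
```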
Affiliate marketing is a marketing practice in which a business rewards one or more affiliates for each visitor or customer brought about by the affiliate's own marketing efforts.
Examples include rewards sites, where users are rewarded with cash or gifts for completing an offer or referring others to the site. The industry has four core players: the merchant (also known as 'retailer' or 'brand'), the network, the publisher (also known as 'the affiliate'), and the customer. The market has grown sufficiently in complexity to warrant a secondary tier of players, including affiliate management agencies, super-affiliates, and specialized third-party vendors.
Affiliate marketing overlaps with other Internet marketing methods to some degree, because affiliates often use regular advertising methods. Those methods include organic search engine optimization, paid search engine marketing, e-mail marketing, and in some sense display advertising. On the other hand, affiliates sometimes use less orthodox techniques, such as publishing reviews of products or services offered by a partner.
Affiliate marketing—using one website to drive traffic to another—is a form of online marketing, which is frequently overlooked by advertisers. While search engines, e-mail, and website syndication capture much of the attention of online retailers, affiliate marketing carries a much lower profile. Still, affiliates continue to play a significant role in e-retailers' marketing strategies.
The concept of revenue sharing—paying commission for referred business—predates affiliate marketing and the Internet. The translation of the revenue share principles to mainstream e-commerce happened in November 1994, almost four years after the origination of the World Wide Web.
Cybererotica was among the early innovators in affiliate marketing with a cost per click program.
During November 1994, CDNOW launched its BuyWeb program. CDNOW had the idea that music-oriented websites could review or list albums on their pages that their visitors may be interested in purchasing. These websites could also offer a link that would take the visitor directly to CDNOW to purchase the albums.
The idea for remote purchasing originally arose because of conversations with music label Geffen Records in the fall of 1994. The management at Geffen wanted to sell its artists' CDs directly from its website, but did not want to implement this capability itself. Geffen asked CDNOW if it could design a program where CDNOW would handle the order fulfillment. Geffen realized that CDNOW could link directly from the artist on its website to Geffen's website, bypassing the CDNOW home page and going directly to an artist's music page.
Amazon.com (Amazon) launched its associate program in July 1996: Amazon associates could place banner or text links on their site for individual books, or link directly to the Amazon home page.
When visitors clicked from the associate's website through to Amazon and purchased a book, the associate received a commission. Amazon was not the first merchant to offer an affiliate program, but its program was the first to become widely known and serve as a model for subsequent programs.
In February 2000, Amazon announced that it had been granted a patent on components of an affiliate program. The patent application was submitted in June 1997, which predates most affiliate programs, but not PC Flowers & Gifts.com (October 1994), AutoWeb.com (October 1995), Kbkids.com/BrainPlay.com (January 1996), EPage (April 1996), and several others.
Affiliate marketing has grown quickly since its inception. The e-commerce website, viewed as a marketing toy in the early days of the Internet, became an integrated part of the overall business plan and in some cases grew to a bigger business than the existing offline business.
According to one report, the total sales amount generated through affiliate networks in 2006 was £2.16 billion in the United Kingdom alone. The estimates were £1.35 billion in sales in 2005. MarketingSherpa's research team estimated that, in 2006, affiliates worldwide earned US$6.5 billion in bounty and commissions from a variety of sources in retail, personal finance, gaming and gambling, travel, telecom, education, publishing, and forms of lead generation other than contextual advertising programs.
Currently the most active sectors for affiliate marketing are the adult, gambling, and retail industries and file-sharing services. The three sectors expected to experience the greatest growth are the mobile phone, finance, and travel sectors. Close behind come the entertainment (particularly gaming) and Internet-related services (particularly broadband) sectors. Several affiliate solution providers also expect increased interest from business-to-business marketers and advertisers in using affiliate marketing as part of their mix.
Websites and services based on Web 2.0 concepts—blogging and interactive online communities, for example—have impacted the affiliate marketing world as well. The new media allowed merchants to become closer to their affiliates and improved the communication between them.
Predominant compensation methods
Eighty percent of affiliate programs today use revenue sharing or pay per sale (PPS) as a compensation method, nineteen percent use cost per action (CPA), and the remaining programs use other methods such as cost per click (CPC) or cost per mille (CPM).
Diminished compensation methods
Within more mature markets, less than one percent of traditional affiliate marketing programs today use cost per click and cost per mille. However, these compensation methods are used heavily in display advertising and paid search.
Cost per mille requires only that the publisher make the advertising available on his website and display it to his visitors in order to receive a commission. Pay per click requires one additional step in the conversion process to generate revenue for the publisher: A visitor must not only be made aware of the advertisement, but must also click on the advertisement to visit the advertiser's website.
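That extra step shows up directly in the revenue arithmetic: CPM pays on impressions alone, while CPC pays only when a visitor clicks through. A rough comparison under assumed rates and click-through figures:

```python
def cpm_revenue(impressions: int, cpm_rate: float) -> float:
    """CPM pays per thousand impressions; no click is required."""
    return impressions / 1000 * cpm_rate

def cpc_revenue(impressions: int, click_through_rate: float,
                cpc_rate: float) -> float:
    """CPC pays only for impressions the visitor actually clicks."""
    clicks = impressions * click_through_rate
    return clicks * cpc_rate

# With 100,000 impressions, a $2.00 CPM pays out regardless of clicks,
# while a $0.50 CPC needs a 0.4% click-through rate to match it.
by_mille = cpm_revenue(100_000, cpm_rate=2.00)
by_click = cpc_revenue(100_000, click_through_rate=0.004, cpc_rate=0.50)
```

The comparison makes the risk split concrete: under CPM the advertiser bears the whole conversion risk, while under CPC the publisher's revenue already depends on visitor behavior.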
Cost per click was more common in the early days of affiliate marketing, but has diminished in use over time due to click fraud issues very similar to the click fraud issues modern search engines are facing today. Contextual advertising programs are not considered in the statistic pertaining to diminished use of cost per click, as it is uncertain if contextual advertising can be considered affiliate marketing.
While these models have diminished in mature e-commerce and online advertising markets, they are still prevalent in some more nascent industries. China is one example where affiliate marketing does not overtly resemble the Western model: many affiliates are paid a flat "cost per day," with some networks offering cost per click or CPM.
In the case of cost per mille/click, the publisher is not concerned about whether a visitor is a member of the audience that the advertiser tries to attract and is able to convert, because at this point the publisher has already earned his commission. This leaves the greater, and, in the case of cost per mille, the full, risk and loss (if the visitor cannot be converted) to the advertiser.
Cost per action/sale methods require that referred visitors do more than visit the advertiser's website before the affiliate receives commission. The advertiser must convert that visitor first. It is in the best interest for the affiliate to send the most closely targeted traffic to the advertiser as possible to increase the chance of a conversion. The risk and loss is shared between the affiliate and the advertiser.
Affiliate marketing is also called "performance marketing", in reference to how sales employees are typically being compensated. Such employees are typically paid a commission for each sale they close, and sometimes are paid performance incentives for exceeding targeted baselines.
Affiliates are not employed by the advertiser whose products or services they promote, but the compensation models applied to affiliate marketing are very similar to the ones used for people in the advertiser's internal sales department.
The phrase, "Affiliates are an extended sales force for your business", which is often used to explain affiliate marketing, is not completely accurate. The primary difference between the two is that affiliate marketers provide little if any influence on a possible prospect in the conversion process once that prospect is directed to the advertiser's website. The sales team of the advertiser, however, does have the control and influence up to the point where the prospect signs the contract or completes the purchase.
Some advertisers offer multi-tier programs that distribute commission into a hierarchical referral network of sign-ups and sub-partners. In practical terms, publisher "A" signs up to the program with an advertiser and gets rewarded for the agreed activity conducted by a referred visitor. If publisher "A" attracts publishers "B" and "C" to sign up for the same program using his sign-up code, all future activities performed by publishers "B" and "C" will result in additional commission (at a lower rate) for publisher "A".
Two-tier programs exist in the minority of affiliate programs; most are simply one-tier. Referral programs beyond two-tier resemble multi-level marketing (MLM) or network marketing but are different: Multi-level marketing (MLM) or network marketing associations tend to have more complex commission requirements/qualifications than standard affiliate programs.
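The two-tier arrangement described above can be sketched as a payout calculation. The 10% direct and 2% override rates are hypothetical; real programs set their own, and amounts are kept in whole cents:

```python
def two_tier_commissions(sale_cents: int, referring_publisher: str,
                         recruiter_of: dict) -> dict:
    """Pay the referring publisher a 10% direct commission and, if someone
    recruited them into the program, a 2% override to that recruiter."""
    payouts = {referring_publisher: sale_cents * 10 // 100}
    recruiter = recruiter_of.get(referring_publisher)
    if recruiter is not None:
        payouts[recruiter] = sale_cents * 2 // 100
    return payouts

# Publisher "A" recruited publishers "B" and "C" with her sign-up code.
recruiter_of = {"B": "A", "C": "A"}
payouts = two_tier_commissions(10_000, "B", recruiter_of)
# B earns the direct commission on the $100 sale; A earns the smaller override.
```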
From the advertiser perspective
Pros and cons
Merchants favor affiliate marketing because in most cases it uses a "pay for performance" model, meaning that the merchant does not incur a marketing expense unless results are accrued (excluding any initial setup cost). Some businesses owe much of their success to this marketing technique, a notable example being Amazon.com. Unlike display advertising, however, affiliate marketing is not easily scalable.
Some merchants run their own (i.e., in-house) affiliate programs using popular software, while others use third-party services provided by intermediaries to track traffic or sales that are referred from affiliates.
Merchants can choose from two different types of affiliate management solutions: standalone software or hosted services, typically called affiliate networks. Payouts to affiliates or publishers are made either by the network on behalf of the merchant (consolidated across all merchants with which the publisher has a relationship and has earned commissions) or directly by the merchant itself.
Affiliate management and program management outsourcing
Successful affiliate programs require significant work and maintenance. Having a successful affiliate program is more difficult than when such programs were just emerging. With the exception of some vertical markets, it is rare for an affiliate program to generate considerable revenue with poor management or no management (i.e., "auto-drive").
Uncontrolled affiliate programs have aided, and continue to aid, rogue affiliates who use spamming, trademark infringement, false advertising, cookie stuffing, typosquatting, and other unethical methods that have given affiliate marketing a negative reputation.
The increased number of Internet businesses and the increased number of people that trust the current technology enough to shop and do business online allows further maturation of affiliate marketing. The opportunity to generate a considerable amount of profit combined with a crowded marketplace filled with competitors of equal quality and size makes it more difficult for merchants to be noticed. In this environment, however, being noticed can yield greater rewards.
Recently, the Internet marketing industry has become more advanced. In some areas online media has been rising to the sophistication of offline media, in which advertising has been largely professional and competitive.
There are significantly more requirements that merchants must meet to be successful, and those requirements are becoming too burdensome for the merchant to manage successfully in-house. An increasing number of merchants are seeking alternative options.
This demand has given rise to outsourced (affiliate) program management (OPM) companies, often founded by veteran affiliate managers and network program managers. OPM companies perform affiliate program management for merchants as a service, much as advertising agencies promote a brand or product in offline marketing.
Types of affiliate websites
Affiliate websites are often categorized by merchants (i.e., advertisers) and affiliate networks. There are currently no industry-wide accepted standards for the categorization. The following types of websites are generic, yet are commonly understood and used by affiliate marketers.
- Search affiliates that utilize pay per click search engines to promote the advertisers' offers (i.e., search arbitrage)
- Comparison shopping websites and directories
- Loyalty websites, typically characterized by providing a reward system for purchases via points back, cash back
- CRM sites that offer charitable donations
- Coupon and rebate websites that focus on sales promotions
- Content and niche market websites, including product review sites
- Personal websites
- Weblogs and website syndication feeds
- E-mail list affiliates (i.e., owners of large opt-in e-mail lists that typically employ e-mail drip marketing) and newsletter list affiliates, which are typically more content-heavy
- Registration path or co-registration affiliates who include offers from other merchants during the registration process on their own website
- Shopping directories that list merchants by categories without providing coupons, price comparisons, or other features based on information that changes frequently, thus requiring continual updates
- Cost per action networks (i.e., top-tier affiliates) that expose offers from the advertiser with which they are affiliated to their own network of affiliates
- Websites using adbars (e.g. Adsense) to display context-sensitive, highly relevant ads for products on the site
- Virtual Currency: a new type of publisher that utilizes the social media space to couple an advertiser's offer with a handout of "virtual currency" in a game or virtual platform.
- Video Blog: Video content that allows viewers to click on and purchase products related to the video's subject.
- File-Sharing: Web sites that host directories of music, movies, games and other software. Users upload content (usually in violation of copyright) to file-hosting sites, and then post descriptions of the material and their download links on directory sites. Uploaders are paid by the file-hosting sites based on the number of times their files are downloaded. The file-hosting sites sell premium download access to the files to the general public. The web sites that host the directory services sell advertising and do not host the files themselves.
Affiliate networks that already have several advertisers typically also have a large pool of publishers. These publishers could be potentially recruited, and there is also an increased chance that publishers in the network apply to the program on their own, without the need for recruitment efforts by the advertiser.
Relevant websites that attract the same target audiences as the advertiser but without competing with it are potential affiliate partners as well. Vendors or existing customers can also become recruits if doing so makes sense and does not violate any laws or regulations.
Almost any website could be recruited as an affiliate publisher, although high-traffic websites are more likely to be interested (for their own sake) in low-risk cost per mille or medium-risk cost per click deals rather than higher-risk cost per action or revenue-share deals.
Locating affiliate programs
There are three primary ways to locate affiliate programs for a target website:
1. Affiliate program directories,
2. Large affiliate networks that provide the platform for dozens or even hundreds of advertisers, and
3. The target website itself. (Websites that offer an affiliate program often have a link titled "affiliate program", "affiliates", "referral program", or "webmasters"—usually in the footer or "About" section of the website.)
If the above locations do not yield information pertaining to affiliates, it may be the case that there exists a non-public affiliate program. Utilizing one of the common website correlation methods may provide clues about the affiliate network. The most definitive method for finding this information is to contact the website owner directly, if a contact method can be located.
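The third approach, checking the target website itself, can be partially automated by scanning a page for anchors whose link text matches the titles listed above. A minimal standard-library sketch; the HTML snippet fed in at the end is invented for the example:

```python
from html.parser import HTMLParser

AFFILIATE_TERMS = ("affiliate program", "affiliates",
                   "referral program", "webmasters")

class AffiliateLinkFinder(HTMLParser):
    """Collect anchors whose link text suggests an affiliate program."""

    def __init__(self):
        super().__init__()
        self.in_anchor = False
        self.current_href = None
        self.matches = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.in_anchor = True
            self.current_href = dict(attrs).get("href")

    def handle_data(self, data):
        # Match the anchor text case-insensitively against the known titles.
        if self.in_anchor and data.strip().lower() in AFFILIATE_TERMS:
            self.matches.append((data.strip(), self.current_href))

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_anchor = False

finder = AffiliateLinkFinder()
finder.feed('<footer><a href="/partners">Affiliate Program</a></footer>')
# finder.matches == [('Affiliate Program', '/partners')]
```

A scan like this only finds publicly linked programs; as the text notes, non-public programs still require website correlation or direct contact with the owner.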
Past and current issues
Since the emergence of affiliate marketing, there has been little control over affiliate activity. Unscrupulous affiliates have used spam, false advertising, forced clicks (to get tracking cookies set on users' computers), adware, and other methods to drive traffic to their sponsors. Although many affiliate programs have terms of service that contain rules against spam, this marketing method has historically proven to attract abuse from spammers.
In the infancy of affiliate marketing, many Internet users held negative opinions due to the tendency of affiliates to use spam to promote the programs in which they were enrolled. As affiliate marketing matured, many affiliate merchants have refined their terms and conditions to prohibit affiliates from spamming.
Search engine spam
As search engines have become more prominent, some affiliate marketers have shifted from sending e-mail spam to creating automatically generated webpages that often contain product data feeds provided by merchants. The goal of such webpages is to manipulate the relevancy or prominence of resources indexed by a search engine, also known as spamdexing. Each page can be targeted to a different niche market through the use of specific keywords, with the result being a skewed form of search engine optimization.
Spam is the biggest threat to organic search engines, whose goal is to provide quality search results for keywords or phrases entered by their users. Google's PageRank algorithm update ("BigDaddy") in February 2006—the final stage of Google's major update ("Jagger") that began in mid-summer 2005—specifically targeted spamdexing with great success. This update thus enabled Google to remove a large amount of mostly computer-generated duplicate content from its index.
Websites consisting mostly of affiliate links have previously held a negative reputation for underdelivering quality content. In 2005, Google made active changes whereby certain websites were labeled as "thin affiliates".
Such websites were either removed from Google's index or relocated within the results page (i.e., moved from the top-most results to a lower position). To avoid this categorization, affiliate marketer webmasters must create quality content on their websites that distinguishes their work from the work of spammers or banner farms, which contain only links leading to merchant sites.
Some commentators originally suggested that Affiliate links work best in the context of the information contained within the website itself. For instance, if a website contains information pertaining to publishing a website, an affiliate link leading to a merchant's Internet service provider (ISP) within that website's content would be appropriate.
If a website contains information pertaining to sports, an affiliate link leading to a sporting goods website may work well within the context of the articles and information about sports. The goal in this case is to publish quality information within the website and provide context-oriented links to related merchant's websites.
However, more recent examples exist of "thin" affiliate sites that use the affiliate marketing model to create value for consumers by offering them a service. These thin-content service affiliates fall into three categories:
- Price comparison
- Cause related marketing
- Time saving
Virus and Trojan distribution through advertising networks
Server farms hosting advertising content are periodically infected by hackers who alter the behavior of these servers. The content they serve to end users includes hidden iframes and other exploits, which can leverage vulnerabilities in various web browsers and operating systems to infect those systems with malware.
End users frequently attribute a computer infection to the particular website they were viewing at the time rather than to the advertising network that website linked to (users commonly do not understand or appreciate that there is a distinction).
The implementation of affiliate marketing on the Internet relies heavily on various techniques built into the design of many web pages and websites, and on calls to external domains to track user actions (click tracking, AdSense) and to serve content (advertising) to the user. Most of this activity adds page-load time and is generally a nuisance to the casual web surfer, who perceives it as visual clutter.
Various countermeasures have evolved over time to prevent or eliminate the appearance of advertising when a web-page is rendered. Third party programs (Ad Aware, SpyBot, pop-up blockers, etc.) and particularly, the use of a comprehensive HOSTS file can effectively eliminate the visual clutter and the extra time and bandwidth needed to render many web pages.
The use of specific entries in the HOSTS file to block these well-known and persistent marketing and click-tracking domains can also reduce a system's exposure to malware by preventing the content of infected advertising or tracking servers from reaching a user's web browser.
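For example, a HOSTS entry maps a hostname to the loopback address, so the browser's requests to that host never leave the machine. The domains below are placeholders, not real tracking hosts:

```
# /etc/hosts (Unix-like systems) or
# %SystemRoot%\System32\drivers\etc\hosts (Windows)
# Requests to these hosts resolve to the local machine and go nowhere.
127.0.0.1  tracker.example.com
127.0.0.1  ads.example.net
```

Because HOSTS lookups take precedence over DNS, this blocks the listed domains for every application on the system, not just one browser.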
Although it differs from spyware, adware often uses the same methods and technologies. Merchants initially were uninformed about adware, what impact it had, and how it could damage their brands. Affiliate marketers became aware of the issue much more quickly, especially because they noticed that adware often overwrites tracking cookies, thus resulting in a decline of commissions.
Affiliates not employing adware felt that it was stealing commission from them. Adware often has no valuable purpose and rarely provides any useful content to the user, who is typically unaware that such software is installed on his/her computer.
Affiliates discussed the issues in Internet forums and began to organize their efforts. They believed that the best way to address the problem was to discourage merchants from advertising via adware. Merchants that were either indifferent to or supportive of adware were exposed by affiliates, thus damaging those merchants' reputations and tarnishing their affiliate marketing efforts.
Many affiliates either terminated the use of such merchants or switched to a competitor's affiliate program. Eventually, affiliate networks were also forced by merchants and affiliates to take a stand and ban certain adware publishers from their network.
The result was the code of conduct issued by Commission Junction/beFree and Performics, LinkShare's Anti-Predatory Advertising Addendum, and ShareASale's complete ban on software applications as a medium for affiliates to promote advertiser offers. Regardless of the progress made, adware continues to be an issue, as demonstrated by the class action lawsuit against ValueClick and its subsidiary Commission Junction filed on April 20, 2007.
Affiliates were among the earliest adopters of pay-per-click advertising when the first pay-per-click search engines emerged in the late 1990s. In 2000, Google launched its pay-per-click service, Google AdWords, which is responsible for the widespread use and acceptance of pay per click as an advertising channel.
An increasing number of merchants engaged in pay per click advertising, either directly or via a search marketing agency, and realized that this space was already well-occupied by their affiliates. Although this situation alone created advertising channel conflicts and debates between advertisers and affiliates, the largest issue concerned affiliates bidding on advertisers’ names, brands, and trademarks.
Several advertisers began to adjust their affiliate program terms to prohibit their affiliates from bidding on those types of keywords. Some advertisers, however, did and still do embrace this behavior, going so far as to allow, or even encourage, affiliates to bid on any term, including the advertiser's trademarks. Some affiliates abuse this by bidding on those terms in many search engines while excluding only the advertiser's own location, so that the advertiser does not see the ads.
Lack of self-regulation and collaboration
Affiliate marketing is driven by entrepreneurs who are working at the edge of Internet marketing. Affiliates are often the first to take advantage of emerging trends and technologies.
This risky approach is one of the reasons why most affiliates fail or give up before they become successful "super affiliates", capable of generating US$10,000 or more per month in commission. This "frontier" life combined with the attitude found in such communities is likely the main reason why the affiliate marketing industry is unable to self-regulate beyond individual contracts between advertisers and affiliates.
The "trial and error" approach is probably the best way to describe the operating methods of affiliate marketers. Affiliate marketing has experienced numerous failed attempts to create an industry organization or association of some kind that could be the initiator of regulations, standards, and guidelines for the industry. Some examples of failed regulation efforts are the Affiliate Union and iAfma.
Online forums and industry trade shows are the only means for the different members from the industry—affiliates/publishers, merchants/advertisers, affiliate networks, third-party vendors, and service providers such as outsourced program managers—to congregate at one location. Online forums are free, enable small affiliates to have a larger say, and provide anonymity.
Trade shows are cost-prohibitive to small affiliates because of the high price for event passes. Larger affiliates may even be sponsored by an advertiser they promote.
Because of the anonymity of online forums, the quantitative majority of industry members is unable to create any form of legally binding rule or regulation to be followed throughout the industry. Online forums have had very little success in representing the majority of the affiliate marketing industry.
Lack of industry standards
Certification and training
Affiliate marketing currently lacks industry standards for training and certification. There are some training courses and seminars that result in certifications; however, the acceptance of such certifications is mostly due to the reputation of the individual or company issuing the certification. Affiliate marketing is not commonly taught in universities, and only a few college instructors work with Internet marketers to introduce the subject to students majoring in marketing.
Education occurs most often in "real life" by becoming involved and learning the details as time progresses. Although there are several books on the topic, some so-called "how-to" or "silver bullet" books instruct readers to manipulate holes in the Google algorithm, which can quickly become out of date, or suggest strategies no longer endorsed or permitted by advertisers.
Outsourced Program Management companies typically combine formal and informal training, providing much of their training through group collaboration and brainstorming. Such companies also try to send each marketing employee to the industry conference of their choice.
Other training resources used include online forums, weblogs, podcasts, video seminars, and specialty websites.
Affiliate Summit is the largest conference in the industry, and many other affiliate networks host their own annual events.
Code of conduct
A code of conduct was released by affiliate networks Commission Junction/beFree and Performics in December 2002 to guide practices and adherence to ethical standards for online advertising.
Members of the marketing industry are recommending that "affiliate marketing" be substituted with an alternative name. Affiliate marketing is often confused with either network marketing or multi-level marketing. Performance marketing is a common alternative, but other recommendations have been made as well.
Sales tax vulnerability
In April 2008 the State of New York inserted an item in the state budget asserting sales tax jurisdiction over Amazon.com sales to residents of New York, based on the existence of affiliate links from New York–based websites to Amazon. The state asserts that even one such affiliate constitutes Amazon having a business presence in the state, and is sufficient to allow New York to tax all Amazon sales to state residents.
Amazon challenged the amendment and lost at the trial level in January 2009. The case is currently making its way through the New York appeals courts.
Cookie stuffing involves placing an affiliate tracking cookie on a website visitor's computer without their knowledge, which will then generate revenue for the person doing the cookie stuffing. This not only generates fraudulent affiliate sales, but also has the potential to overwrite other affiliates' cookies, essentially stealing their legitimately earned commissions.
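Why the overwrite matters can be sketched in a few lines (hypothetical cookie and helper names): browsers keep one value per cookie name, so the last writer wins, and the network credits whoever's ID is in the cookie at checkout.

```python
# Minimal sketch (hypothetical names) of last-write-wins affiliate cookies.
# A dict stands in for the browser's cookie jar.
cookie_jar = {}

def set_affiliate_cookie(jar, affiliate_id):
    """Model an affiliate network dropping its tracking cookie.

    Setting the same cookie name overwrites any earlier value.
    """
    jar["aff_track"] = affiliate_id

def credited_affiliate(jar):
    """At checkout, the network credits whoever's ID is in the cookie."""
    return jar.get("aff_track")

set_affiliate_cookie(cookie_jar, "legit-affiliate-123")  # user clicked a real link
set_affiliate_cookie(cookie_jar, "stuffer-999")          # stuffed without user action

print(credited_affiliate(cookie_jar))  # prints "stuffer-999"
```

The legitimate affiliate's cookie has been silently replaced, which is exactly how a stuffer captures the commission on a sale it never referred.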
Click to reveal
Many voucher code web sites use a click-to-reveal format, which requires the web site user to click to reveal the voucher code. The action of clicking places the cookie on the website visitor's computer. The IAB have stated that "Affiliates must not use a mechanism whereby users are encouraged to click to interact with content where it is unclear or confusing what the outcome will be."
Automated Clearing House
Automated Clearing House (ACH) is an electronic network for financial transactions in the United States. ACH processes large volumes of credit and debit transactions in batches. ACH credit transfers include direct deposit payroll and vendor payments. ACH direct debit transfers include consumer payments on insurance premiums, mortgage loans, and other kinds of bills.
Debit transfers also include new applications such as the point-of-purchase (POP) check conversion pilot program sponsored by NACHA-The Electronic Payments Association. Both the government and the commercial sectors use ACH payments. Businesses increasingly use ACH online to have customers pay, rather than via credit or debit cards.
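The batch processing described above can be illustrated with a simplified sketch (hypothetical field names; real ACH files follow the NACHA record format): credits and debits in a batch are netted per account rather than settled one by one.

```python
# Illustrative sketch of batch settlement, not the real NACHA file format.
def settle_batch(transactions):
    """Return the net money movement per account for one batch.

    Each transaction is (account, kind, amount), where kind is
    'credit' (money in, e.g. payroll) or 'debit' (money out, e.g. a bill).
    """
    net = {}
    for account, kind, amount in transactions:
        sign = 1 if kind == "credit" else -1
        net[account] = net.get(account, 0) + sign * amount
    return net

batch = [
    ("alice", "credit", 1500),  # direct-deposit payroll
    ("alice", "debit", 200),    # insurance premium
    ("bob", "debit", 950),      # mortgage payment
]
print(settle_batch(batch))  # {'alice': 1300, 'bob': -950}
```

Netting is why batching is efficient: one settlement entry per account replaces many individual transfers.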
Rules and regulations that govern the ACH network are established by NACHA (formerly the National Automated Clearing House Association) and the Federal Reserve. In 2002, this network processed an estimated 8.05 billion ACH transactions with a total value of $21.7 trillion. (Credit card payments are handled by separate networks.)
The Federal Reserve Banks are collectively the nation's largest automated clearinghouse operator, and in 2005 processed 60% of commercial interbank ACH transactions.
The Electronic Payments Network (EPN), the only private-sector ACH operator in the US, processed the remaining 40%. FedACH is the Federal Reserve's centralized application software used to process ACH transactions.
EPN and the Reserve Banks rely on each other for the processing of some transactions when either party to the transaction is not their customer. These interoperator transactions are settled by the Reserve Banks.
Business-to-business (B2B) describes commerce transactions between businesses, such as between a manufacturer and a wholesaler, or between a wholesaler and a retailer. Contrasting terms are business-to-consumer (B2C) and business-to-government (B2G).
The volume of B2B (Business-to-Business) transactions is much higher than the volume of B2C transactions. The primary reason for this is that in a typical supply chain there will be many B2B transactions involving sub components or raw materials, and only one B2C transaction, specifically sale of the finished product to the end customer.
For example, an automobile manufacturer makes several B2B transactions such as buying tires, glass for windscreens, and rubber hoses for its vehicles. The final transaction, a finished vehicle sold to the consumer, is a single (B2C) transaction.
B2B is also used in the context of communication and collaboration. Many businesses are now using social media to connect with their consumers (B2C); however, they are now using similar tools within the business so employees can connect with one another. When communication is taking place amongst employees, this can be referred to as "B2B" communication.
The term was originally coined to describe the electronic communications between businesses or enterprises in order to distinguish it from the communications between businesses and consumers ("business-to-consumer"). It eventually came to be used in marketing as well, initially describing only industrial or capital goods marketing.
Today it is widely used to describe all products and services used by enterprises. Many professional institutions and the trade publications focus much more on B2C than B2B, although most sales and marketing personnel are in the B2B sector.
Retail consists of the sale of physical goods or merchandise from a fixed location, such as a department store, boutique or kiosk, or by mail, in small or individual lots for direct consumption by the purchaser. Retailing may include subordinated services, such as delivery. Purchasers may be individuals or businesses.
In commerce, a "retailer" buys goods or products in large quantities from manufacturers or importers, either directly or through a wholesaler, and then sells smaller quantities to the end-user. Retail establishments are often called shops or stores.
Retailers are at the end of the supply chain. Manufacturing marketers see the process of retailing as a necessary part of their overall distribution strategy. The term "retailer" is also applied where a service provider services the needs of a large number of individuals, such as a public utility, like electric power.
Shops may be on residential streets, on shopping streets with few or no houses, or in a shopping mall. Shopping streets may be for pedestrians only. Sometimes a shopping street has a partial or full roof to protect customers from precipitation. Online retailing, a type of electronic commerce used for business-to-consumer (B2C) transactions, and mail order are forms of non-shop retailing.
Shopping generally refers to the act of buying products. Sometimes this is done to obtain necessities such as food and clothing; sometimes it is done as a recreational activity. Recreational shopping often involves window shopping (just looking, not buying) and browsing and does not always result in a purchase.
Retail comes from the Old French word tailler (compare modern French retailler), which means "to cut off, clip, pare, divide" in terms of tailoring (1365). It was first recorded as a noun with the meaning of a "sale in small quantities" in 1433 (from the Middle French retail, "piece cut off, shred, scrap, paring"). As in French, the words for retail in Dutch and German (detailhandel and Einzelhandel, respectively) also refer to the sale of small quantities of items.
Types of retail outlets
A marketplace is a location where goods and services are exchanged. The traditional market square is a city square where traders set up stalls and buyers browse the merchandise. This kind of market is very old, and countless such markets are still in operation around the whole world.
In some parts of the world, the retail business is still dominated by small family-run stores, but this market is increasingly being taken over by large retail chains.
Retail is usually classified by type of products as follows:
- Food products
- Hard goods ("hard-line retailers") - appliances, electronics, furniture, sporting goods, etc.
- Soft goods - clothing, apparel, and other fabrics.
There are the following types of retailers by marketing strategy:
- Department stores - very large stores offering a huge assortment of "soft" and "hard" goods; often resembling a collection of specialty stores. Such a retailer carries a variety of categories and a broad assortment at average prices, and offers considerable customer service.
- Discount stores - tend to offer a wide array of products and services, but compete mainly on price, offering an extensive assortment of merchandise at affordable and cut-rate prices. These retailers normally sell less fashion-oriented brands.
- Supermarkets - sell mostly food products;
- Warehouse stores - warehouses that offer low-cost, often high-quantity goods piled on pallets or steel shelves; warehouse clubs charge a membership fee;
- Variety stores - these offer extremely low-cost goods, with limited selection;
- Demographic - retailers that aim at one particular segment (e.g., high-end retailers focusing on wealthy individuals).
- Mom-and-pop stores - small retail outlets owned and operated by individuals. The range of products is very selective and few in number. These stores, seen in local communities, are often family-run businesses. The floor area of the store depends on the store owner.
- Specialty stores - a typical specialty store concentrates on a particular category and provides a high level of service to its customers. A pet store that specializes in selling dog food would be regarded as a specialty store. Branded stores also fall under this format: a customer visiting a Reebok or Gap store will find only Reebok or Gap products, respectively.
- General store - a rural store that supplies the main needs for the local community;
- Convenience stores - typically found in residential areas, providing a limited range of merchandise at above-average prices with a speedy checkout. This format is ideal for emergency and immediate purchases.
- Hypermarkets - provide a variety and huge volumes of exclusive merchandise at low margins. Operating costs are comparatively lower than in other retail formats.
- Supermarkets - self-service stores consisting mainly of grocery items and a limited range of non-food products. They may adopt a Hi-Lo or an EDLP (everyday low price) pricing strategy. Supermarkets typically range between 20,000 and 40,000 square feet (roughly 1,900-3,700 m2). Example: SPAR supermarket.
- Malls - a range of retail shops at a single location, providing products, food and entertainment under one roof.
- Category killers or category specialists - by supplying a wide assortment in a single category at lower prices, a retailer can "kill" that category for other retailers. In some categories, such as electronics, products are displayed at the centre of the store, with salespeople available to address customer queries and give suggestions when required. Other retail stores are forced to reduce their prices when a category specialist is present in the vicinity.
- E-tailers - customers can shop and order through the internet, and the merchandise is delivered to the customer's doorstep. These retailers often use a drop-shipping technique: they accept payment for the product, but the customer receives it directly from the manufacturer or a wholesaler. This format is ideal for customers who do not want to travel to retail stores and are interested in home shopping. However, it is important for the customer to be wary of defective products and insecure credit-card transactions. Examples: Amazon, Pennyful and eBay.
- Vending machines - automated equipment into which customers insert money to acquire products.
Some stores take a no frills approach, while others are "mid-range" or "high end", depending on what income level they target.
Other types of retail store include:
- Automated Retail stores are self service, robotic kiosks located in airports, malls and grocery stores. The stores accept credit cards and are usually open 24/7. Examples include ZoomShops and Redbox.
- Big-box stores encompass larger department, discount, general merchandise, and warehouse stores.
- Convenience store - a small store often with extended hours, stocking everyday or roadside items;
- General store - a store which sells most goods needed, typically in a rural area;
Retailers can opt for a format, as each provides a different retail mix to its customers based on customer demographics, lifestyle and purchase behavior. A good format helps display products well and entices target customers, generating sales.
The pricing technique used by most retailers is cost-plus pricing. This involves adding a markup amount (or percentage) to the retailer's cost. Another common technique is suggested retail pricing. This simply involves charging the amount suggested by the manufacturer and usually printed on the product by the manufacturer.
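A minimal sketch of the two techniques, using illustrative numbers (the function names are ours, not an industry standard):

```python
# Sketch of the two common retail pricing techniques described above.
def cost_plus_price(cost, markup_pct):
    """Cost-plus pricing: add a percentage markup to the retailer's cost."""
    return round(cost * (1 + markup_pct / 100), 2)

def suggested_retail_price(msrp):
    """Suggested retail pricing: charge the manufacturer's printed price as-is."""
    return msrp

print(cost_plus_price(40.00, 25))     # 40.00 cost + 25% markup -> 50.0
print(suggested_retail_price(59.99))  # 59.99
```

Cost-plus guarantees a margin on every sale; suggested retail pricing trades that control for simplicity and price consistency across retailers.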
In Western countries, retail prices are often called psychological prices or odd prices. Often prices are fixed and displayed on signs or labels. Alternatively, when prices are not clearly displayed, there can be price discrimination, where the sale price is dependent upon the customer.
For example, a customer may have to pay more if the seller determines that he or she is willing and/or able to. Another example is the practice of discounting for youths, students, or senior citizens.
There are several ways in which consumers can receive goods from a retailer:
- Counter service, where goods are out of reach of buyers and must be obtained from the seller. This type of retail is common for small expensive items (e.g. jewelry) and controlled items like medicine and liquor. It was common before the 1900s in the United States and is more common in certain countries like India.
- Delivery, where goods are shipped directly to consumers' homes or workplaces. Mail order from a printed catalog was invented in 1744 and was common in the late 19th and early 20th centuries. Ordering by telephone is now common, whether from a catalog, newspaper, television advertisement or a local restaurant menu, for immediate service (especially for pizza delivery). Direct marketing, including telemarketing and television shopping channels, is also used to generate telephone orders. Online ordering started gaining significant market share in developed countries in the 2000s.
- Door-to-door sales, where the salesperson sometimes travels with the goods for sale.
- Self-service, where goods may be handled and examined prior to purchase
Second hand retail
Some shops sell second-hand goods. In the case of a nonprofit shop, the public donates goods to the shop to be sold. In give-away shops goods can be taken for free.
Another form is the pawnshop, in which goods are sold that were used as collateral for loans. There are also "consignment" shops, where a person can place an item in a store and, if it sells, give the shop owner a percentage of the sale price. The advantage of selling an item this way is that the established shop gives the item exposure to more potential buyers.
To achieve and maintain a foothold in an existing market, a prospective retail establishment must overcome the following hurdles:
- Regulatory barriers including
- Restrictions on real estate purchases, especially as imposed by local governments and against "big-box" chain retailers;
- Restrictions on foreign investment in retailers, in terms of both absolute amount of financing provided and percentage share of voting stock (e.g., common stock) purchased;
- Unfavorable taxation structures, especially those designed to penalize or keep out "big box" retailers (see "Regulatory" above);
- Absence of developed supply chain and integrated IT management;
- High competitiveness among existing market participants and resulting low profit margins, caused in part by
- Constant advances in product design resulting in constant threat of product obsolescence and price declines for existing inventory; and
- Lack of properly educated and/or trained work force, often including management, caused in part by
- Lack of educational infrastructure enabling prospective market entrants to respond to the above challenges.
Behind the scenes at retail, there is another factor at work. Corporations and independent store owners alike are always trying to get the edge on their competitors.
One way to do this is to hire a merchandising solutions company to design custom store displays that will attract more customers in a certain demographic. The nation's largest retailers spend millions every year on in-store marketing programs that correspond to seasonal and promotional changes.
As products change, so does the retail landscape. Retailers can also use facing techniques to create the look of a perfectly stocked store, even when it is not.
A destination store is one that customers will initiate a trip specifically to visit, sometimes over a large area. These stores are often used to "anchor" a shopping mall or plaza, generating foot traffic, which is capitalized upon by smaller retailers.
Customer service is the "sum of acts and elements that allow consumers to receive what they need or desire from your retail establishment." It is important for a sales associate to greet the customer and make himself available to help the customer find whatever he needs.
When a customer enters the store, it is important that the sales associate does everything in his power to make the customer feel welcomed and important, and to make sure he leaves the store satisfied. Giving the customer full, undivided attention and helping him find what he is looking for will contribute to the customer's satisfaction.
The Retail Sales report is published every month. It is a measure of consumer spending and an important indicator of US GDP. Retail firms provide data on the dollar value of their retail sales and inventories. A sample of 12,000 firms is included in the final survey and 5,000 in the advance one. The advance estimate is based on a subsample of the US Census Bureau's complete retail and food services sample.
It has been published by the US Census Bureau since 1951.
A great deal of consolidation has occurred among retailers and retail chains over the last couple of decades. Between 1988 and 2010, 40,788 mergers and acquisitions with a total known value of USD 2,255 billion were announced worldwide.
The largest transactions involving retailers in or from the United States have been: the acquisition of Albertson's Inc. for USD 17 billion in 2006; the merger between Federated Department Stores Inc. and May Department Stores, valued at USD 16.5 billion in 2005 (now Macy's); and the merger between Kmart Holding Corp. and Sears, Roebuck & Co., with a value of USD 10.9 billion in 2004.
Civil law (legal system)
Civil law (or civilian law) is a legal system inspired by Roman law, whose primary feature is that laws are written into a codified collection and are not (as in common law) determined by judges.
Conceptually, it is the group of legal ideas and systems ultimately derived from the Code of Justinian, but heavily overlaid by Germanic, ecclesiastical, feudal, and local practices, as well as doctrinal strains such as natural law, codification, and legislative positivism.
Materially, civil law proceeds from abstractions, formulates general principles, and distinguishes substantive rules from procedural rules. It holds legislation as the primary source of law, and the court system is usually inquisitorial, unbound by precedent, and composed of specially trained judicial officers with a limited authority to interpret law. Juries separate from the judges are not used, although in some cases, volunteer lay judges participate along with legally trained career judges.
A common carrier in common-law countries (corresponding to a public carrier in civil-law systems, usually called simply a carrier) is a person or company that transports goods or people for any person or company and that is responsible for any possible loss of the goods during transport. A common carrier offers its services to the general public under license or authority provided by a regulatory body.
The regulatory body has usually been granted “ministerial authority” by the legislation which created it. The regulatory body may create, interpret, and enforce its regulations upon the common carrier (subject to judicial review) with independence and finality, as long as it acts within the bounds of the enabling legislation.
A common carrier is distinguished from a contract carrier (also called a public carrier in UK English), which is a carrier that transports goods for only a certain number of clients and that can refuse to transport goods for anyone else, and from a private carrier. A common carrier holds itself out to provide service to the general public without discrimination (to meet the needs of the regulator's quasi judicial role of impartiality toward the public's interest) for the "public convenience and necessity".
A common carrier must further demonstrate to the regulator that it is "fit, willing, and able" to provide those services for which it is granted authority. Common carriers typically transport persons or goods according to defined and published routes, time schedules, and rate tables upon the approval of regulators.
Public airlines, railroads, bus lines, taxicab companies, cruise ships, motor carriers (i.e., trucking companies), and other freight companies generally operate as common carriers. Under U.S. law, an ocean freight forwarder cannot act as a common carrier.
The term common carrier is a common law term, which is seldom used in continental Europe because it has no exact equivalent in civil-law systems. In continental Europe, the functional equivalent of a common carrier is referred to as a public carrier (or simply as a carrier). (However, public carrier in continental Europe is defined differently than "public carrier" in British English, in which it is a synonym for contract carrier.)
Although common carriers generally transport people or goods, in the United States the term may also refer to telecommunications providers and public utilities. In certain U.S. states, amusement parks that operate roller coasters and comparable rides have been found to be common carriers; a famous example is Disneyland.
Regulatory bodies may also grant carriers the authority to operate under contract with their customers instead of under common carrier authority, rates, schedules and rules. These regulated carriers, known as contract carriers, must demonstrate that they are "fit, willing and able" to provide service, according to standards enforced by the regulator.
However, contract carriers are specifically not required to demonstrate that they will operate for the "public convenience and necessity." A contract carrier may be authorized to provide service over either fixed routes and schedules, i.e., as regular route carrier or on an ad hoc basis as an irregular route carrier.
The term carrier refers only to the person (legal or natural) that enters into a contract of carriage with the shipper. The carrier does not necessarily have to own, or even be in possession of, a means of transport. Unless otherwise agreed in the contract, the carrier may use any means of transport approved in its operating authority, as long as it is the most favorable from the cargo interests' point of view. The carrier's duty is to get the goods to the agreed destination within the agreed time, or within a reasonable time.
The person that is physically transporting the goods on a means of transport is referred to as the "actual carrier". When a carrier subcontracts with another provider, such as an independent contractor or a third-party carrier, the common carrier is said to be providing "substituted service". The same person may hold both common carrier and contract carrier authority.
In the case of a rail line in the U.S., the owner of the property is said to retain a "residual common carrier obligation", unless otherwise transferred (such as in the case of a commuter rail system, where the authority operating passenger trains may acquire the property but not this obligation from the former owner), and must operate the line if service is terminated.
In contrast, private carriers are not licensed to offer a service to the public. Private carriers generally provide transport on an irregular or ad hoc basis for their owners.
Carriers were very common in rural areas prior to motorized transport. Regular services by horse drawn vehicles would ply to local towns, taking goods to market or bringing back purchases for the village. If space permitted, passengers could also travel.
In the telecommunications regulation context in the United States, telecommunications carriers are regulated by the Federal Communications Commission under title II of the Communications Act of 1934.
The Telecommunications Act of 1996 made extensive revisions to the "Title II" provisions regarding common carriers and repealed the judicial 1982 AT&T consent decree (often referred to as the "modification of final judgment" or "MFJ") that effectuated the breakup of AT&T's Bell System. Further, the Act gives telephone companies the option of providing video programming on a common carrier basis or as a conventional cable television operator.
If it chooses the former, the telephone company will face less regulation but will also have to comply with FCC regulations requiring what the Act refers to as "open video systems." The Act generally bars, with certain exceptions including most rural areas, acquisitions by telephone companies of more than a 10 percent interest in cable operators (and vice versa) and joint ventures between telephone companies and cable systems serving the same areas.
Computer networks (for example, the Internet) that are built on top of telecommunications networks are Information Services or Enhanced Services, and are generally regulated under title I of the Communications Act (other networks, such as cable video networks or wireless taxi dispatch networks, are neither telecommunications carrier networks nor information services).
Internet Service Providers have argued against being classified as "common carriers" and, so far, have managed to avoid that classification. Their argument has largely conflated "telecommunications carriers" with "common carriers," assuming that if they were labeled common carriers, they would be regulated under Title II of the Communications Act by the FCC.
This is incorrect; as noted above, a firm can be a common carrier without being a telecommunications carrier. The FCC proceeding that established that Internet networks are not telecommunications carriers is the Computer Inquiries. A later FCC report, In re Federal-State Joint Board on Universal Service, Report to Congress, 13 FCC Rcd. 11501 (1998), reviewed this policy (this report was not an order and did not have the effect of regulatory law; it is, however, an excellent capture of FCC policy at that time).
The policy of the FCC has evolved. Traditionally, an Internet network information service would acquire its telecommunications needs from a telecommunications carrier. It was an Internet network layered on top of a telecommunications network. Pursuant to recent FCC decisions, Internet DSL and Internet Cable services are now considered combined as one "information service."
There is no telecommunications carrier service underneath for other ISPs to use. This has resulted in a transformation of the ISP market. Previously, thousands of ISPs had access to the telephone network. Now, with no broadband telecommunications carrier service available, there are generally only two Internet broadband providers in a residential market: the cable Internet provider and the DSL Internet provider.
Because ISPs are no longer prohibited from discriminating among different types of content under common carrier law, Internet providers may charge additional fees for certain kinds of services, such as Virtual Private Networks. Some network neutrality supporters advocate reclassifying all ISPs as common carriers in order to prevent content discrimination.
Internet networks are, however, already treated like common carriers in many respects. ISPs are largely immune from liability for third party content. The Good Samaritan provision of the Communications Decency Act established immunity from liability for third party content on grounds of libel or slander. The DMCA established that ISPs which comply with DMCA would not be liable for the copyright violations of third parties on their network.
In the United States, many oil, gas and CO2 pipelines are common carriers. The Federal Energy Regulatory Commission (FERC) regulates rates charged and other tariff terms imposed by interstate common carrier pipelines. Intrastate common carrier pipeline tariffs are often regulated by state agencies. The U.S. and many states have delegated the power of eminent domain to common carrier gas pipelines. Many states have delegated eminent domain power to common carrier oil pipelines.
Common carriers are subject to special laws and regulations which differ depending on the means of transport used, e.g. sea carriers are often governed by quite different rules than road carriers or railway carriers. In common law jurisdictions as well as under international law, a common carrier is absolutely liable for goods carried by it, with four exceptions:
- An act of nature
- An act of the public enemies
- Fault or fraud by the shipper
- An inherent defect in the goods
A sea carrier may also, under the Hague-Visby Rules, escape liability on grounds other than those mentioned above; e.g., a sea carrier is not liable for damage to the goods if the damage results from a fire on board the ship or from a navigational error committed by the ship's master or another crew member.
Carriers typically incorporate further exceptions into a contract of carriage, often specifically claiming not to be a common carrier.
An important legal requirement for a common carrier as a public provider is that it cannot discriminate; that is, it cannot refuse service unless there is some compelling reason (e.g., the post does not allow cash to be sent). As of 2007, the status of Internet service providers as common carriers, and their rights and responsibilities, is widely debated (network neutrality).
It is also important to remember that the term common carrier does not exist in continental Europe; it is distinctive to common law systems, particularly that of the U.S.A.
In Ludditt v Ginger Coote Airways the Privy Council (Lord Macmillan, Lord Wright, Lord Porter and Lord Simonds) held that the liability of a public or common carrier of passengers is only to carry with due care. This is more limited than that of a common carrier of goods. The complete freedom of a carrier of passengers at common law to make such contracts as he thinks fit was not curtailed by the Railway and Canal Traffic Act 1854, and a specific contract which enlarges, diminishes or excludes his duty to take care (e.g. by a condition that the passenger travels "at his own risk against all casualties") cannot be pronounced unreasonable if the law authorizes it.
There was nothing in the provisions of the Canadian Transport Act 1938, section 25, which would invalidate a provision excluding liability. Grand Trunk Railway Co of Canada v Robinson A.C. 740 was followed and Peek v North Staffordshire Railway 11 E.R. 1109 was distinguished.
A computer network, often simply referred to as a network, is a collection of hardware components and computers interconnected by communication channels that allow sharing of resources and information.
Networks may be classified according to a wide variety of characteristics such as the medium used to transport the data, communications protocol used, scale, topology, and organizational scope.
The rules and data formats for exchanging information in a computer network are defined by communications protocols. Well-known communications protocols are Ethernet, a hardware and Link Layer standard that is ubiquitous in local area networks, and the Internet Protocol Suite, which defines a set of protocols for internetworking, i.e. for data communication between multiple networks, as well as host-to-host data transfer, and application-specific data transmission formats.
Computer networking is sometimes considered a sub-discipline of electrical engineering, telecommunications, computer science, information technology or computer engineering, since it relies upon the theoretical and practical application of these disciplines.
Cross-docking is a practice in logistics of unloading materials from an incoming semi-trailer truck or railroad car and loading these materials directly into outbound trucks, trailers, or rail cars, with little or no storage in between. This may be done to change type of conveyance, to sort material intended for different destinations, or to combine material from different origins into transport vehicles (or containers) with the same, or similar destination.
Cross-dock operations were pioneered in the US trucking industry in the 1930s, and have been in continuous use in LTL (less than truckload) operations ever since. The US military began utilizing cross-dock operations in the 1950s. Wal-Mart began utilizing cross-docking in the retail sector in the late 1980s.
In the LTL trucking industry, cross-docking is done by moving cargo from one transport vehicle directly into another, with minimal or no warehousing. In retail practice, cross-docking operations may utilize staging areas where inbound materials are sorted, consolidated, and stored until the outbound shipment is complete and ready to ship.
Advantages of Retail Cross-Docking
- Streamlines the supply chain from point of origin to point of sale
- Reduces handling costs, operating costs, and the storage of inventory
- Products get to the distributor and consequently to the customer faster
- Reduces, or eliminates warehousing costs
- May increase available retail sales space.
Disadvantages of Cross-Docking
- Potential partners may lack the necessary storage capacity, or an adequate transport fleet, to operate cross-docking
- An adequate IT system is needed
A debit card (also known as a bank card or check card) is a plastic card that provides the cardholder electronic access to his or her bank account(s) at a financial institution. Some cards have a stored value with which a payment is made, while most relay a message to the cardholder's bank to withdraw funds from a designated account in favor of the payee's designated bank account.
The card can be used as an alternative payment method to cash when making purchases. In some cases, the cards are designed exclusively for use on the Internet, and so there is no physical card.
In many countries the use of debit cards has become so widespread that their volume of use has overtaken or entirely replaced the check and, in some instances, cash transactions. Like credit cards, debit cards are used widely for telephone and Internet purchases.
However, unlike credit cards, the funds paid using a debit card are transferred immediately from the bearer's bank account, instead of having the bearer pay back the money at a later date.
Debit cards usually also allow for instant withdrawal of cash, acting as the ATM card for withdrawing cash and as a check guarantee card. Merchants may also offer cashback facilities to customers, where a customer can withdraw cash along with their purchase.
Direct shipment is a method of delivering goods from the supplier or the product owner to the customer directly. In most cases, the customer orders the goods from the product owner. This delivery scheme reduces transportation and storage costs, but requires additional planning and administration.
Direct shipment is a common practice in industries that rely on outsourced manufacturing to a large extent, such as the high-tech, apparel and footwear industries, as well as durables industries.
Electronic business, commonly referred to as "eBusiness" or "e-business", or an internet business, may be defined as the application of information and communication technologies (ICT) in support of all the activities of business.
Commerce constitutes the exchange of products and services between businesses, groups and individuals and can be seen as one of the essential activities of any business. Electronic commerce focuses on the use of ICT to enable the external activities and relationships of the business with individuals, groups and other businesses.
The term "e-business" was coined by IBM's marketing and Internet teams in 1996.
Electronic business methods enable companies to link their internal and external data processing systems more efficiently and flexibly, to work more closely with suppliers and partners, and to better satisfy the needs and expectations of their customers.
In practice, e-business is more than just e-commerce. While e-business refers to more strategic focus with an emphasis on the functions that take place using electronic capabilities, e-commerce is a subset of an overall e-business strategy.
E-commerce seeks to add revenue streams by using the World Wide Web or the Internet to build and enhance relationships with clients and partners and to improve efficiency. Often, e-commerce involves the application of knowledge management systems.
E-business involves business processes spanning the entire value chain: electronic purchasing and supply chain management, processing orders electronically, handling customer service, and cooperating with business partners.
Special technical standards for e-business facilitate the exchange of data between companies. E-business software solutions allow the integration of intra and inter firm business processes. E-business can be conducted using the Web, the Internet, intranets, extranets, or some combination of these.
Basically, electronic commerce (EC) is the process of buying, transferring, or exchanging products, services, and/or information via computer networks, including the Internet. EC can also be viewed from many perspectives, including business process, service, learning, collaboration, and community. EC is often confused with e-business.
Electronic Business Security
E-business systems naturally have greater security risks than traditional business systems; it is therefore important for e-business systems to be fully protected against these risks. A far greater number of people have access to e-businesses through the Internet than would have access to a traditional business.
Customers, suppliers, employees, and numerous other people use any particular e-business system daily and expect their confidential information to stay secure. Hackers are one of the great threats to the security of e-businesses. Some common security concerns for e-Businesses include keeping business and customer information private and confidential, authenticity of data, and data integrity.
Some of the methods of protecting e-business security and keeping information secure include physical security measures as well as data storage, data transmission, anti-virus software, firewalls, and encryption to list a few.
Key Security Concerns within E-Business
Privacy and confidentiality
Confidentiality is the extent to which a business makes personal information available to other businesses and individuals. With any business, confidential information must remain secure and only be accessible to the intended recipient. However, this becomes even more difficult when dealing with e-businesses specifically.
To keep such information secure means protecting any electronic records and files from unauthorized access, as well as ensuring safe transmission and data storage of such information. Tools such as encryption and firewalls manage this specific concern within e-business.
Authenticity
E-business transactions pose greater challenges for establishing authenticity due to the ease with which electronic information may be altered and copied. Both parties in an e-business transaction want to have the assurance that the other party is who they claim to be, especially when a customer places an order and then submits a payment electronically.
One common way to ensure this is to limit access to a network or trusted parties by using a virtual private network (VPN) technology. The establishment of authenticity is even greater when a combination of techniques are used, and such techniques involve checking “something you know” (i.e. password or PIN), “something you have” (i.e. credit card), or “something you are” (i.e. digital signatures or voice recognition methods).
Many times in e-business, however, “something you are” is not directly checked; instead, identity is verified by checking the purchaser’s “something you have” (i.e. credit card) and “something you know” (i.e. card number).
Data integrity
Data integrity answers the question “Can the information be changed or corrupted in any way?” This leads to the assurance that the message received is identical to the message sent. A business needs to be confident that data is not changed in transit, whether deliberately or by accident. To help with data integrity, firewalls protect stored data against unauthorized access, while simply backing up data allows recovery should the data or equipment be damaged.
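As a concrete sketch of the integrity idea above, a stored checksum can reveal whether data changed in transit or at rest. This is a minimal illustration using Python's standard hashlib; the record contents are made up for the example:

```python
import hashlib

def checksum(data: bytes) -> str:
    """Return a SHA-256 digest of the data as a hex string."""
    return hashlib.sha256(data).hexdigest()

# Record a checksum when the data is stored...
record = b"order=1234;amount=99.95"
stored_digest = checksum(record)

# ...and recompute it later: any change to the data changes the digest.
assert checksum(b"order=1234;amount=99.95") == stored_digest
assert checksum(b"order=1234;amount=0.95") != stored_digest
```

A checksum alone only detects accidental corruption; detecting deliberate tampering additionally requires a key, as discussed under digital signatures below.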
Non-repudiation
This concern deals with the existence of proof in a transaction. A business must have assurance that the receiving party or purchaser cannot deny that a transaction has occurred, and this means having sufficient evidence to prove the transaction. One way to address non-repudiation is using digital signatures.
A digital signature not only ensures that a message or document has been electronically signed by the person, but since a digital signature can only be created by one person, it also ensures that this person cannot later deny that they provided their signature.
Access control
When certain electronic resources and information are limited to only a few authorized individuals, a business and its customers must have the assurance that no one else can access the systems or information. Fortunately, there are a variety of techniques to address this concern, including firewalls, access privileges, user identification and authentication techniques (such as passwords and digital certificates), and Virtual Private Networks (VPN).
Availability
This concern is specifically pertinent to a business’ customers, as certain information must be available when customers need it. Messages must be delivered in a reliable and timely fashion, and information must be stored and retrieved as required.
Because availability of service is important for all e-business websites, steps must be taken to prevent disruption of service by events such as power outages and damage to physical infrastructure. Examples to address this include data backup, fire-suppression systems, Uninterruptible Power Supply (UPS) systems, virus protection, as well as making sure that there is sufficient capacity to handle the demands posed by heavy network traffic.
Common Security Measures for E-Business Systems
Many different forms of security exist for e-businesses. Some general security guidelines include areas in physical security, data storage, data transmission, application development, and system administration.
Physical security
Although e-business is conducted online, there are still physical security measures that can be taken to protect the business as a whole. The building that houses the servers and computers must be protected and have limited access to employees and other persons.
For example, this room should only allow authorized users to enter, and should ensure that “windows, dropped ceilings, large air ducts, and raised floors” do not allow easy access to unauthorized persons. Preferably these important items would be kept in an air-conditioned room without any windows.
Protecting against the environment is equally important in physical security as protecting against unauthorized users. The room may protect the equipment against flooding by keeping all equipment raised off of the floor. In addition, the room should contain a fire extinguisher in case of fire. The organization should have a fire plan in case this situation arises.
In addition to keeping the servers and computers safe, physical security of confidential information is important. This includes client information such as credit card numbers, checks, phone numbers, etc.
It also includes any of the organization's private information. Locking physical and electronic copies of this data in a drawer or cabinet is one additional measure of security. Doors and windows leading into this area should also be securely locked. Only employees that need to use this information as part of their job should be given keys.
Important information can also be kept secure by keeping backups of files and updating them on a regular basis. It is best to keep these backups in a separate secure location in case there is a natural disaster or breach of security at the main location.
“Failover sites” can be built in case there is a problem with the main location. This site should be just like the main location in terms of hardware, software, and security features. This site can be used in case of fire or natural disaster at the original site. It is also important to test the “failover site” to ensure it will actually work if the need arises.
State-of-the-art security systems might include access control, alarm systems, and closed-circuit television. One form of access control is face (or other feature) recognition systems. This allows only authorized personnel to enter, and also serves the purpose of convenience for employees who don't have to carry keys or cards. Cameras can also be placed throughout the building and at all points of entry. Alarm systems also serve as an added measure of protection against theft.
Data storage
Storing data in a secure manner is very important to all businesses, but especially to e-businesses, where most of the data is stored electronically. Data that is confidential should not be stored on the e-business' server, but instead moved to another physical machine to be stored. If possible, this machine should not be directly connected to the internet, and should also be kept in a safe location. The information should be stored in an encrypted format.
Any highly sensitive information should not be stored if it is possible. If it does need to be stored, it should be kept on only a few reliable machines to prevent easy access. Extra security measures should be taken to protect this information (such as private keys) if possible.
Additionally, information should only be kept for a short period of time, and once it is no longer necessary it should be deleted to prevent it from falling into the wrong hands. Similarly, backups and copies of information should be kept secure with the same security measures as the original information. Once a backup is no longer needed, it should be carefully but thoroughly destroyed.
Data transmission and application development
All sensitive information being transmitted should be encrypted. Businesses can opt to refuse clients who can't accept this level of encryption. Confidential and sensitive information should also never be sent through e-mail. If it must be, then it should also be encrypted.
Transferring and displaying secure information should be kept to a minimum. This can be done by never displaying a full credit card number for example. Only a few of the numbers may be shown, and changes to this information can be done without displaying the full number. It should also be impossible to retrieve this information online.
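The masking rule described above can be sketched in a few lines. The function name and the last-four-digits cutoff are illustrative assumptions, not a standard:

```python
def mask_card_number(card_number: str) -> str:
    """Show only the last four digits of a card number,
    replacing the rest with asterisks."""
    digits = card_number.replace(" ", "")
    return "*" * (len(digits) - 4) + digits[-4:]

print(mask_card_number("4111 1111 1111 1111"))  # ************1111
```

The key point is that the full number never reaches the display layer; only the masked string is ever rendered.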
Source code should also be kept in a secure location. It should not be visible to the public.
Applications and changes should be tested before they are placed online for reliability and compatibility.
System administration
Security on default operating systems should be increased immediately. Patches and software updates should be applied in a timely manner. All system configuration changes should be kept in a log and promptly updated.
System administrators should keep watch for suspicious activity within the business by inspecting log files and researching repeated logon failures. They can also audit their e-business system and look for any holes in the security measures. It is important to make sure plans for security are in place but also to test the security measures to make sure they actually work.
With the use of social engineering, the wrong people can get a hold of confidential information. To protect against this, staff can be made aware of social engineering and trained to properly deal with sensitive information.
E-businesses may use passwords for employee logons, for access to secure information, or for customer accounts. Passwords should be made difficult to guess. They should consist of both letters and numbers, and be at least seven to eight characters long. They should not contain any names, birth dates, etc.
Passwords should be changed frequently and should be unique each time. Only the password's user should know the password and it should never be written down or stored anywhere. Users should also be locked out of the system after a certain number of failed logon attempts to prevent guessing of passwords.
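A minimal sketch of the password rules above, assuming an eight-character minimum and a letters-plus-digits requirement (a real policy would also reject names, birth dates, and dictionary words, and would enforce lockout server-side):

```python
import string

def password_acceptable(password: str) -> bool:
    """Check a simple policy: at least 8 characters,
    containing both letters and digits."""
    if len(password) < 8:
        return False
    has_letter = any(c in string.ascii_letters for c in password)
    has_digit = any(c in string.digits for c in password)
    return has_letter and has_digit

print(password_acceptable("s3cureP4ss"))  # True
print(password_acceptable("letters"))     # False: too short, no digit
```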
When it comes to security solutions, there are some main goals that are to be met. These goals are data integrity, strong authentication, and privacy.
Access and data integrity
There are several different ways to prevent access to the data that is kept online. One way is to use anti-virus software. This is something that most people use to protect their networks regardless of the data they have. E-businesses should use this because they can then be sure that the information sent to and received by their systems is clean.
A second way to protect the data is to use firewalls and network protection. A firewall is used to restrict access to private networks, as well as public networks that a company may use. The firewall also has the ability to log attempts into the network and provide warnings as it is happening.
They are very beneficial to keep third-parties out of the network. Businesses that use Wi-Fi need to consider different forms of protection because these networks are easier for someone to access. They should look into protected access, virtual private networks, or internet protocol security.
Another option is an intrusion detection system, which alerts administrators to possible intrusions. Some companies set up traps or "honeypots" to attract attackers and are then able to know when someone is trying to hack into that area.
Encryption
Encryption, which is actually a part of cryptography, involves transforming texts or messages into a code which is unreadable. These messages have to be decrypted in order to be understandable or usable for someone.
There is a key that identifies the data to a certain person or company. With public key encryption, there are actually two keys used. One is public and one is private. The public one is used for encryption, and the private for decryption. The level of the actual encryption can be adjusted and should be based on the information.
The key can be just a simple slide of letters or a completely random mix-up of letters. This is relatively easy to implement because there is software that a company can purchase. A company needs to be sure that their keys are registered with a certificate authority.
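The "simple slide of letters" mentioned above is essentially a Caesar cipher. The toy below illustrates that idea only; it is a symmetric scheme, not public-key encryption, and offers no real security:

```python
def caesar(text: str, shift: int) -> str:
    """'Slide' each letter by a fixed amount, wrapping around
    the alphabet; non-letters pass through unchanged."""
    out = []
    for ch in text:
        if ch.isalpha():
            base = ord('A') if ch.isupper() else ord('a')
            out.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            out.append(ch)
    return "".join(out)

secret = caesar("transfer funds", 3)   # encrypt with key 3
print(secret)                          # wudqvihu ixqgv
print(caesar(secret, -3))              # decrypt: transfer funds
```

In a real public-key scheme the encryption and decryption keys are different (e.g. RSA), so the encryption key can be published while the decryption key stays private; that asymmetry cannot be shown with a slide cipher.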
Digital certificates
The point of a digital certificate is to identify the owner of a document. This way the receiver knows that it is an authentic document. Companies can use these certificates in several different ways. They can be used as a replacement for user names and passwords.
Each employee can be given these to access the documents that they need from wherever they are. These certificates also use encryption, but are a little more complicated than normal encryption: they embed important information within the code in order to assure the authenticity of the documents, as well as the confidentiality and data integrity that always accompany encryption.
Digital certificates are not commonly used because they are confusing for people to implement. There can be complications when using different browsers. This means they need to use multiple certificates. The process is being adjusted so that it is easier to use.
Digital signatures
A final way to secure information online is to use a digital signature. If a document has a digital signature on it, no one else is able to edit the information without the change being detected.
To create a digital signature, one uses a combination of cryptography and a message digest. A message digest gives the document a unique value; that value is then encrypted with the sender’s private key.
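The digest-then-sign idea can be sketched as follows. Python's standard library has no RSA, so an HMAC with a hypothetical shared key stands in for encrypting the digest with the sender's private key; a real digital signature would use an asymmetric key pair:

```python
import hashlib
import hmac

document = b"Contract: deliver 100 units by March 1."

# Step 1: a message digest gives the document a unique value.
digest = hashlib.sha256(document).hexdigest()

# Step 2 (stand-in): protect the digest with a key. A real digital
# signature would encrypt the digest with the sender's *private* key;
# here an HMAC with a hypothetical shared secret illustrates the idea.
signing_key = b"hypothetical-secret-key"
signature = hmac.new(signing_key, document, hashlib.sha256).hexdigest()

# Any edit to the document breaks verification.
tampered = b"Contract: deliver 900 units by March 1."
assert hmac.new(signing_key, document, hashlib.sha256).hexdigest() == signature
assert hmac.new(signing_key, tampered, hashlib.sha256).hexdigest() != signature
```

Note the HMAC variant does not provide non-repudiation, since both parties hold the key; only an asymmetric signature ties the value to a single signer.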
Electronic Data Interchange
Electronic data interchange (EDI) is the structured transmission of data between organizations by electronic means. It is used to transfer electronic documents or business data from one computer system to another computer system, i.e. from one trading partner to another trading partner without human intervention.
It is more than mere e-mail; for instance, organizations might replace bills of lading and even cheques with appropriate EDI messages. It also refers specifically to a family of standards.
In 1996, the National Institute of Standards and Technology defined electronic data interchange as "the computer-to-computer interchange of strictly formatted messages that represent documents other than monetary instruments. EDI implies a sequence of messages between two parties, either of whom may serve as originator or recipient.
The formatted data representing the documents may be transmitted from originator to recipient via telecommunications or physically transported on electronic storage media." The definition distinguishes EDI from mere electronic communication or data exchange, specifying that "in EDI, the usual processing of received messages is by computer only.
Human intervention in the processing of a received message is typically intended only for error conditions, for quality review, and for special situations. For example, the transmission of binary or textual data is not EDI as defined here unless the data are treated as one or more data elements of an EDI message and are not normally intended for human interpretation as part of online data processing."
EDI can be formally defined as the transfer of structured data, by agreed message standards, from one computer system to another without human intervention.
EDI is considered to be a technical representation of a business conversation between two entities, either internal or external. Note that there is a perception that "EDI" constitutes the entire electronic data interchange paradigm, including the transmission, message flow, document format, and software used to interpret the documents. EDI is considered to describe the rigorously standardized format of electronic documents. EDI is very useful in supply chain management.
The EDI standards were designed to be independent of communication and software technologies. EDI can be transmitted using any methodology agreed to by the sender and recipient. This includes a variety of technologies, including modem (asynchronous and synchronous), FTP, e-mail, HTTP, AS1, AS2, etc. It is important to differentiate between the EDI documents and the methods for transmitting them.
When some people compared the synchronous-protocol 2400 bit/s modems, CLEO devices, and value-added networks used to transmit EDI documents with transmission via the Internet, they equated the non-Internet technologies with EDI and predicted erroneously that EDI itself would be replaced along with them. These non-Internet transmission methods are being replaced by Internet protocols such as FTP, telnet, and e-mail, but the EDI documents themselves still remain.
As more trading partners use the Internet for transmission, standards have emerged. In 2002, the IETF published RFC 3335, offering a standardized, secure method of transferring EDI data via e-mail. On July 12, 2005, an IETF working group ratified RFC4130 for MIME-based HTTP EDIINT (a.k.a. AS2) transfers, and is preparing a similar RFC for FTP transfers (a.k.a. AS3). While some EDI transmission has moved to these newer protocols, the providers of the value-added networks remain active.
EDI documents generally contain the same information that would normally be found in a paper document used for the same organizational function. For example, an EDI 940 ship-from-warehouse order is used by a manufacturer to tell a warehouse to ship product to a retailer. It typically has a ship-to address, a bill-to address, and a list of product numbers (usually a UPC) and quantities.
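As an illustration of what such a document looks like on the wire, the sketch below splits a made-up X12-style fragment into segments and elements. The delimiters shown are common defaults, but real interchanges declare them in the ISA header, and the sample data is hypothetical:

```python
def parse_x12(raw: str, seg_term: str = "~", elem_sep: str = "*"):
    """Split a raw X12-style string into segments, then each
    segment into its elements. Delimiters vary by interchange;
    these defaults are common but not universal."""
    segments = [s for s in raw.split(seg_term) if s]
    return [seg.split(elem_sep) for seg in segments]

# A hypothetical fragment of a 940 warehouse shipping order:
raw = "W05*N*538686**001001*538686~N1*ST*ACME RETAIL STORE 123~"
for seg in parse_x12(raw):
    print(seg[0], seg[1:])  # segment ID, then its elements
```

Note the empty string in the first segment: adjacent separators mark an omitted optional element, which is how the standards distinguish "not sent" from a blank value.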
Another example is the set of messages between sellers and buyers, such as request for quotation (RFQ), bid in response to RFQ, purchase order, purchase order acknowledgment, shipping notice, receiving advice, invoice, and payment advice. However, EDI is not confined to just business data related to trade but encompasses all fields such as medicine (e.g., patient records and laboratory results), transport (e.g., container and modal information), engineering and construction, etc.
In some cases, EDI will be used to create a new business information flow (that was not a paper flow before). This is the case in the Advanced Shipment Notification (856) which was designed to inform the receiver of a shipment, the goods to be received and how the goods are packaged.
Some major sets of EDI standards:
- The UN-recommended UN/EDIFACT is the only international standard and is predominant outside of North America.
- The US standard ANSI ASC X12 (X12) is predominant in North America.
- The TRADACOMS standard developed by the ANA (Article Numbering Association) is predominant in the UK retail industry.
- The ODETTE standard is used within the European automotive industry.
All of these standards first appeared in the early to mid 1980s. The standards prescribe the formats, character sets, and data elements used in the exchange of business documents and forms. The complete X12 Document List includes all major business documents, including purchase orders (called "ORDERS" in UN/EDIFACT and an "850" in X12) and invoices (called "INVOIC" in UN/EDIFACT and an "810" in X12).
The EDI standard specifies which pieces of information are mandatory for a particular document, which are optional, and gives the rules for the structure of the document. The standards are like building codes. Just as two kitchens can be built "to code" but look completely different, two EDI documents can follow the same standard and contain different sets of information. For example, a food company may indicate a product's expiration date while a clothing manufacturer would choose to send color and size information.
Organizations that send or receive documents between each other are referred to as "trading partners" in EDI terminology. The trading partners agree on the specific information to be transmitted and how it should be used. This is done in human-readable specifications (also called Message Implementation Guidelines). While the standards are analogous to building codes, the specifications are analogous to blueprints.
(The specification may also be called a "mapping," but the term mapping is typically reserved for specific machine-readable instructions given to the translation software.) Larger trading "hubs" have existing Message Implementation Guidelines which mirror their business processes for processing EDI and they are usually unwilling to modify their EDI business practices to meet the needs of their trading partners.
Often in a large company these EDI guidelines will be written to be generic enough to be used by different branches or divisions and therefore will contain information not needed for a particular business document exchange. Other large companies may instead create separate EDI guidelines for each branch or division.
Trading partners are free to use any method for the transmission of documents. In the past one of the more popular methods was the usage of a bisync modem to communicate through a value added network (VAN). Some organizations have used direct modem to modem connections and bulletin board systems (BBS), and recently there has been a move towards using some of the many Internet protocols for transmission, but most EDI is still transmitted using a VAN. In the healthcare industry, a VAN is referred to as a "clearinghouse".
In its most basic form, a VAN (value-added network) acts as a regional post office. It receives transactions, examines the 'from' and the 'to' information, and routes the transaction to the final recipient. VANs provide a number of additional services, e.g. retransmitting documents, providing third-party audit information, acting as a gateway for different transmission methods, and handling telecommunications support.
Because of these and other services VANs provide, businesses frequently use a VAN even when both trading partners are using Internet-based protocols. Healthcare clearinghouses perform many of the same functions as a VAN, but are subject to additional legal restrictions governing the data they carry. VANs also provide an advantage with certificate replacement in AS2 transmissions. Because each AS2 connection between business partners usually involves its own security certificate, routing a large number of partners through a VAN can make certificate replacement much easier.
- Value-added networks are the go-between in EDI communications.
- The VAN is responsible for routing, storing and delivering EDI messages, and also provides delivery reports.
- Depending on the VAN type, messages may need additional enveloping or may be routed by intelligent VANs that can read the EDI message itself.
- VANs may be operated by various entities:
- telecom companies;
- industry group consortia;
- a large company interacting with its suppliers/vendors.
Until recently, Internet transmission was handled by nonstandard methods agreed between trading partners, usually involving FTP or e-mail attachments. There are also standards for embedding EDI documents into XML. Many organizations are migrating to the AS2 protocol to reduce costs; for example, Wal-Mart now requires its trading partners to use AS2.
AS2 (Applicability Statement 2) is the specification by which vendor applications communicate EDI or other business-to-business data (such as XML) over the Internet using HTTP, the standard used by the World Wide Web. AS2 provides security for the transport payload through digital signatures and data encryption, and ensures reliable, non-repudiable delivery through the use of receipts.
EDI translation software provides the interface between internal systems and the EDI format sent/received. For an "inbound" document the EDI solution will receive the file (either via a Value Added Network or directly using protocols such as FTP or AS2), take the received EDI file (commonly referred to as a "mailbag"), validate that the trading partner who is sending the file is a valid trading partner, that the structure of the file meets the EDI standards, and that the individual fields of information conform to the agreed upon standards.
Typically the translator will create a file in fixed-length, variable-length, or XML-tagged format, or "print" the received EDI document (for non-integrated EDI environments). The next step is to convert/transform the file that the translator creates into a format that can be imported into a company's back-end business systems or ERP.
This can be accomplished by using a custom program, an integrated proprietary "mapper", or an integrated standards-based graphical "mapper" that uses a standard data transformation language such as XSLT. The final step is to import the transformed file (or database) into the company's back-end enterprise resource planning (ERP) system.
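A minimal sketch of this inbound flow (partner validation, a structural check, then transformation into an importable record) might look like the following; the partner IDs and field layout are invented for illustration:

```python
# Hedged sketch of an inbound EDI flow: validate the sending partner,
# check basic document structure, then transform segments into a flat
# record a back-end system could import. All IDs are illustrative.

KNOWN_PARTNERS = {"PARTNER01", "PARTNER02"}  # hypothetical trading partners

def process_inbound(sender_id: str, raw: str) -> dict:
    if sender_id not in KNOWN_PARTNERS:
        raise ValueError(f"unknown trading partner: {sender_id}")
    segments = [seg.split("*") for seg in raw.strip().split("~") if seg]
    # Structural check: a transaction set must open with ST and close with SE.
    if not segments or segments[0][0] != "ST" or segments[-1][0] != "SE":
        raise ValueError("file does not meet the agreed structure")
    record = {"transaction": segments[0][1], "lines": []}
    for seg in segments[1:-1]:
        if seg[0] == "W01":          # illustrative line-item segment
            record["lines"].append({"qty": int(seg[1]), "upc": seg[3]})
    return record

doc = "ST*940*0001~W01*12*CA*012345678901~SE*3*0001~"
print(process_inbound("PARTNER01", doc))
```

A production translator would of course validate far more (control numbers, element counts, agreed code lists), but the three stages mirror the ones described above.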
For an "outbound" document the process for integrated EDI is to export a file (or read a database) from a company's back-end ERP, transform the file to the appropriate format for the translator. The translation software will then "validate" the EDI file sent to ensure that it meets the standard agreed upon by the trading partners, convert the file into "EDI" format (adding in the appropriate identifiers and control structures) and send the file to the trading partner (using the appropriate communications protocol).
Another critical component of any EDI translation software is a complete "audit" of all the steps involved in moving business documents between trading partners. The audit ensures that any transaction (which in reality is a business document) can be tracked so that it is not lost. If a retailer's Purchase Order to a supplier is "lost" anywhere in the business process, the effect is devastating to both businesses.
The supplier does not fulfill the order, having never received it, thereby losing business and damaging the relationship with its retail client. The retailer has a stock outage, and the effects are lost sales, reduced customer service and ultimately lower profits.
In EDI terminology "inbound" and "outbound" refer to the direction of transmission of an EDI document in relation to a particular system, not the direction of merchandise, money or other things represented by the document. For example, an EDI document that tells a warehouse to perform an outbound shipment is an inbound document in relation to the warehouse computer system. It is an outbound document in relation to the manufacturer or dealer that transmitted the document.
Advantages over paper systems
EDI and other similar technologies save a company money by providing an alternative to, or replacing information flows that require a great deal of human interaction and materials such as paper documents, meetings, faxes, etc. Even when paper documents are maintained in parallel with EDI exchange, e.g. printed shipping manifests, electronic exchange and the use of data from that exchange reduces the handling costs of sorting, distributing, organizing, and searching paper documents.
EDI and similar technologies allow a company to take advantage of the benefits of storing and manipulating data electronically without the cost of manual entry. Another advantage of EDI is reduced errors, such as shipping and billing errors, because EDI eliminates the need to rekey documents on the destination side.
One very important advantage of EDI over paper documents is the speed with which the trading partner receives and incorporates the information into their system, thus greatly reducing cycle times. For this reason, EDI can be an important component of just-in-time production systems.
According to the 2008 Aberdeen report "A Comparison of Supplier Enablement around the World", only 34% of purchase orders are transmitted electronically in North America. In EMEA, 36% of orders are transmitted electronically and in APAC, 41% of orders are transmitted electronically. They also report that the average paper requisition to order costs a company $37.45 in North America, $42.90 in EMEA and $23.90 in APAC.
With EDI, the requisition-to-order cost falls to $23.83 in North America, $34.05 in EMEA and $14.78 in APAC.
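Taking the Aberdeen figures above at face value, the per-order saving from EDI is straightforward to compute:

```python
# Per-order saving from EDI versus paper, using the Aberdeen 2008
# requisition-to-order cost figures quoted above.
paper = {"North America": 37.45, "EMEA": 42.90, "APAC": 23.90}
edi   = {"North America": 23.83, "EMEA": 34.05, "APAC": 14.78}

for region in paper:
    saving = paper[region] - edi[region]
    print(f"{region}: ${saving:.2f} saved per order")
```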
Barriers to implementation
There are a few barriers to adopting electronic data interchange. One of the most significant barriers is the accompanying business process change. Existing business processes built around slow paper handling may not be suited for EDI and would require changes to accommodate automated processing of business documents.
For example, a business may receive the bulk of its goods by one- or two-day shipping and all of its invoices by mail. The existing process may therefore assume that goods are typically received before the invoice. With EDI, the invoice will typically be sent when the goods ship and will therefore require a process that handles large numbers of invoices whose corresponding goods have not yet been received.
Another significant barrier is the cost in time and money in the initial set-up. The preliminary expenses and time that arise from the implementation, customization and training can be costly and therefore may discourage some businesses. The key is to determine what method of integration is right for the company which will determine the cost of implementation.
For a business that only receives one P.O. per year from a client, fully integrated EDI may not make economic sense. In this case, businesses may implement inexpensive "rip and read" solutions or use outsourced EDI solutions provided by EDI "Service Bureaus". For other businesses, the implementation of an integrated EDI solution may be necessary as increases in trading volumes brought on by EDI force them to re-implement their order processing business processes.
The key hindrance to a successful implementation of EDI is the perception many businesses have of the nature of EDI. Many view EDI from the technical perspective that EDI is a data format; it would be more accurate to take the business view that EDI is a system for exchanging business documents with external entities, and integrating the data from those documents into the company's internal systems.
Successful implementations of EDI take into account the effect externally generated information will have on their internal systems and validate the business information received. For example, allowing a supplier to update a retailer's Accounts Payables system without appropriate checks and balances would be a recipe for disaster. Businesses new to the implementation of EDI should take pains to avoid such pitfalls.
Increased efficiency and cost savings drive the adoption of EDI for most trading partners.
Electronic Funds Transfer
Electronic funds transfer or EFT is the electronic exchange or transfer of money from one account to another, either within a single financial institution or across multiple institutions, through computer-based systems.
The term covers a number of different concepts:
- Cardholder-initiated transactions, where a cardholder makes use of a payment card
- Direct deposit payroll payments for a business to its employees, possibly via a payroll service bureau
- Direct debit payments, sometimes called electronic checks, for which a business debits the consumer's bank accounts for payment for goods or services
- Electronic bill payment in online banking, which may be delivered by EFT or paper check
- Transactions involving stored value of electronic money, possibly in a private currency
- Wire transfer via an international banking network (carries a higher fee in North America)
- Electronic Benefit Transfer
In 1978 the U.S. Congress passed the Electronic Funds Transfer Act to establish the rights and liabilities of consumers as well as the responsibilities of all participants in EFT activities in the United States.
Electronic Mail
Electronic mail, commonly called e-mail or email, is a method of exchanging digital messages from an author to one or more recipients. Modern email operates across the Internet or other computer networks.
Some early email systems required that the author and the recipient both be online at the same time, in common with instant messaging. Today's email systems are based on a store-and-forward model. Email servers accept, forward, deliver and store messages. Neither the users nor their computers are required to be online simultaneously; they need connect only briefly, typically to an email server, for as long as it takes to send or receive messages.
An email message consists of three components: the message envelope, the message header, and the message body. The message header contains control information, including, minimally, an originator's email address and one or more recipient addresses. Usually descriptive information is also added, such as a subject header field and a message submission date/time stamp.
Originally a text-only (7-bit ASCII and others) communications medium, email was extended to carry multi-media content attachments, a process standardized in RFC 2045 through 2049. Collectively, these RFCs have come to be called Multipurpose Internet Mail Extensions (MIME).
Electronic mail predates the inception of the Internet, and was in fact a crucial tool in creating it, but the history of modern, global Internet email services reaches back to the early ARPANET. Standards for encoding email messages were proposed as early as 1973 (RFC 561). Conversion from ARPANET to the Internet in the early 1980s produced the core of the current services. An email sent in the early 1970s looks quite similar to a basic text message sent on the Internet today.
Network-based email was initially exchanged on the ARPANET in extensions to the File Transfer Protocol (FTP), but is now carried by the Simple Mail Transfer Protocol (SMTP), first published as Internet standard 10 (RFC 821) in 1982. In the process of transporting email messages between systems, SMTP communicates delivery parameters using a message envelope separate from the message (header and body) itself.
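The envelope/header separation can be seen in Python's standard library: the addresses actually used for delivery are supplied independently of whatever appears in the header (the server name in the comment is a placeholder):

```python
# Sketch: the SMTP envelope is separate from the message header.
# smtplib.SMTP.send_message accepts explicit to_addrs that override
# the To:/Cc: headers for delivery purposes.
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "author@example.com"
msg["To"] = "visible-recipient@example.com"   # header only, not delivery
msg["Subject"] = "Envelope vs header"
msg.set_content("Delivery follows the envelope, not this header.")

envelope_rcpts = ["actual-recipient@example.net"]  # what SMTP delivers to

# With a live server (hostname is a placeholder) one would do:
#   import smtplib
#   with smtplib.SMTP("mail.example.com") as s:
#       s.send_message(msg, to_addrs=envelope_rcpts)
print(msg["To"], "!=", envelope_rcpts[0])
```

This is exactly how Bcc: works in practice: the address appears in the envelope but is omitted from the header that recipients see.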
Electronic mail has several English spelling options that occasionally prove cause for surprisingly vehement disagreement.
- email is the form required by IETF Requests for Comment and working groups and increasingly by style guides. This spelling also appears in most dictionaries.
- e-mail is a form previously recommended by some prominent journalistic and technical style guides. According to Corpus of Contemporary American English data, this form appears most frequently in edited, published American English writing.
- mail was the form used in the original RFC. The service is referred to as mail and a single piece of electronic mail is called a message.
- eMail, capitalizing only the letter M, was common among ARPANET users and the early developers of Unix, CMS, AppleLink, eWorld, AOL, GEnie, and Hotmail.
- EMail is a traditional form that has been used in RFCs for the "Author's Address", and is expressly required "for historical reasons".
- E-mail is sometimes used, capitalizing the initial letter E as in similar abbreviations like A-bomb, H-bomb, and C-section.
There is also some variety in the plural form of the term. In US English email is used as a mass noun (like the term mail for items sent through the postal system), but in British English it is more commonly used as a count noun with the plural emails.
Sending text messages electronically could be said to date back to the Morse code telegraph of the mid 1800s; and the 1939 New York World's Fair, where IBM sent a letter of congratulations from San Francisco to New York on an IBM radio-type, calling it a high-speed substitute for mail service in the world of tomorrow. Teleprinters were used in Germany during World War II, and use spread until the late 1960s when there was a worldwide Telex network. Additionally, there was the similar but incompatible American TWX, which remained important until the late 1980s.
Host-based mail systems
With the introduction of MIT's Compatible Time-Sharing System (CTSS) in 1961, multiple users were for the first time able to log into a central system from remote dial-up terminals and to store and share files on the central disk.
Informal methods of using this to pass messages developed, and were expanded to create the first true email system: MIT's CTSS MAIL, in 1965.
Other early time-sharing systems soon had their own email applications:
- 1972 - Unix mail program
- 1972 - APL Mailbox by Larry Breed
- 1981 - PROFS by IBM
- 1982 - ALL-IN-1 by Digital Equipment Corporation
Although similar in concept, all these original email systems had widely different features and ran on incompatible systems. They allowed communication only between users logged into the same host or "mainframe" - although this could be hundreds or even thousands of users within an organization.
Soon systems were developed to link compatible mail programs between different organizations over dialup modems or leased lines, creating local and global networks.
- In 1971 the first ARPANET email was sent and, through RFC 561, RFC 680, RFC 724, and finally 1977's RFC 733, it became a standardized working system.
Other separate networks were also being created including:
- Unix mail was networked by 1978's uucp, which was also used for USENET newsgroup postings
- IBM mainframe email was linked by BITNET in 1981
- In 1984, IBM PCs running DOS could link with FidoNet for email and shared bulletin-board posting
LAN email systems
In the early 1980s, networked personal computers on LANs became increasingly important. Server-based systems similar to the earlier mainframe systems were developed. Again these systems initially allowed communication only between users logged into the same server infrastructure. Examples include:
- WordPerfect Office
- Microsoft Mail
- Banyan VINES
- Lotus Notes
Eventually these systems could also be linked between different organizations, as long as they ran the same email system and proprietary protocol.
Attempts at interoperability
Early interoperability among independent systems included:
- ARPANET, the forerunner of today's Internet, defined the first protocols for dissimilar computers to exchange email
- uucp implementations for non-Unix systems were used as an open "glue" between differing mail systems, primarily over dialup telephones
- CSNet used dial-up telephone access to link additional sites to the ARPANET and then Internet
Later efforts at interoperability standardization included:
- Novell briefly championed the open MHS protocol but abandoned it after purchasing the non-MHS WordPerfect Office (renamed Groupwise)
- The Coloured Book protocols on UK academic networks until 1992
- X.400, promoted in the 1980s and early 1990s by major vendors and mandated for government use under GOSIP, was abandoned by all but a few in favor of Internet SMTP by the mid-1990s.
From SNDMSG to MSG
In the early 1970s, Ray Tomlinson updated an existing utility called SNDMSG so that it could copy messages (as files) over the network. Lawrence Roberts, the project manager for the ARPANET development, took the idea of READMAIL, which dumped all "recent" messages onto the user's terminal, and wrote a program for TENEX in TECO macros called RD which permitted accessing individual messages. Barry Wessler then updated RD and called it NRD.
Marty Yonke rewrote NRD to combine reading, access to SNDMSG for sending, and a help system, and called the utility WRD, which was later known as BANANARD. John Vittal then updated this version to include three important commands: Move (a combined save/delete command), Answer (which determined to whom a reply should be sent) and Forward (which sent an email to a person who was not already a recipient). The system was called MSG.
With inclusion of these features, MSG is considered to be the first integrated modern email program, from which many other applications have descended.
The rise of ARPANET mail
The ARPANET computer network made a large contribution to the development of email. There is one report that indicates experimental inter-system email transfers began shortly after its creation in 1969.
Ray Tomlinson is generally credited as having sent the first email across a network, initiating the use of the "@" sign to separate the names of the user and the user's machine in 1971, when he sent a message from one Digital Equipment Corporation DEC-10 computer to another DEC-10. The two machines were placed next to each other.
Tomlinson's work was quickly adopted across the ARPANET, which significantly increased the popularity of email. For many years, email was the killer app of the ARPANET and then the Internet.
Most other networks had their own email protocols and address formats; as the influence of the ARPANET and later the Internet grew, central sites often hosted email gateways that passed mail between the Internet and these other networks. Internet email addressing is still complicated by the need to handle mail destined for these older networks.
Some well-known examples of these were UUCP (mostly Unix computers), BITNET (mostly IBM and VAX mainframes at universities), FidoNet (personal computers), DECNET (various networks) and CSNET a forerunner of NSFNet.
An example of an Internet email address that routed mail to a user at a UUCP host:
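The example itself does not survive here; a hypothetical bang-path address of the kind described (all host and user names invented for illustration) would have looked something like:

```
hubhost!middlehost!edgehost!user@uucp-gateway.example.com
```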
This was necessary because in early years UUCP computers did not maintain (and could not consult central servers for) information about the location of all hosts they exchanged mail with, but rather only knew how to communicate with a few network neighbors; email messages (and other data such as Usenet News) were passed along in a chain among hosts who had explicitly agreed to share data with each other. (Eventually the UUCP Mapping Project would provide a form of network routing database for email.)
The Internet email message format is defined in RFC 5322, with multi-media content attachments being defined in RFC 2045 through RFC 2049, collectively called Multipurpose Internet Mail Extensions or MIME. Prior to the introduction of RFC 2822 in 2001, the format described by RFC 822 had been the standard for Internet email for nearly 20 years. RFC 822 was published in 1982 and based on the earlier RFC 733 for the ARPANET.
Internet email messages consist of two major sections:
- Header — Structured into fields such as From, To, CC, Subject, Date, and other information about the email.
- Body — The basic content, as unstructured text; sometimes containing a signature block at the end. This is exactly the same as the body of a regular letter.
The header is separated from the body by a blank line.
Each message has exactly one header, which is structured into fields. Each field has a name and a value. RFC 5322 specifies the precise syntax.
Informally, each line of text in the header that begins with a printable character begins a separate field. The field name starts in the first character of the line and ends before the separator character ":".
The separator is then followed by the field value (the "body" of the field). The value is continued onto subsequent lines if those lines have a space or tab as their first character. Field names and values are restricted to 7-bit ASCII characters. Non-ASCII values may be represented using MIME encoded words.
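Python's standard email parser illustrates this syntax, including the continuation ("folding") of a long field value onto an indented line:

```python
# Parse a raw message with the stdlib parser: each header field is
# "Name: value", and a value folded onto an indented continuation
# line is rejoined into a single value.
from email import policy
from email.parser import Parser

raw = (
    "From: author@example.com\n"
    "Subject: a long subject line that has been\n"
    " folded onto a second line\n"
    "\n"
    "This is the body.\n"
)
msg = Parser(policy=policy.default).parsestr(raw)
print(msg["Subject"])     # the folded value comes back as one line
print(msg.get_content())  # everything after the blank line is the body
```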
The message header must include at least the following fields:
- From: The email address, and optionally the name, of the author(s). In many email clients this is not changeable except by editing account settings.
- Date: The local time and date when the message was written. Like the From: field, many email clients fill this in automatically when sending. The recipient's client may then display the time in the recipient's local format and time zone.
The message header should include at least the following fields:
- Message-ID: Also an automatically generated field; used to prevent multiple delivery and for reference in In-Reply-To: (see below).
- In-Reply-To: Message-ID of the message that this is a reply to. Used to link related messages together. This field only applies for reply messages.
RFC 3864 describes registration procedures for message header fields at the IANA; it provides for permanent and provisional message header field names, including also fields defined for MIME, netnews, and http, and referencing relevant RFCs. Common header fields for email include:
- To: The email address(es), and optionally name(s) of the message's recipient(s). Indicates primary recipients (multiple allowed), for secondary recipients see Cc: and Bcc: below.
- Subject: A brief summary of the topic of the message. Certain abbreviations are commonly used in the subject, including "RE:" and "FW:".
- Bcc: Blind Carbon Copy; addresses added to the SMTP delivery list but not (usually) listed in the message data, remaining invisible to other recipients.
- Cc: Carbon copy; many email clients mark email in the inbox differently depending on whether the recipient appears in the To: or Cc: list.
- Content-Type: Information about how the message is to be displayed, usually a MIME type.
- Precedence: commonly with values "bulk", "junk", or "list"; used to indicate that automated "vacation" or "out of office" responses should not be returned for this mail, e.g. to prevent vacation notices from being sent to all other subscribers of a mailing list. Sendmail uses this header to affect prioritization of queued email, with "Precedence: special-delivery" messages delivered sooner. With modern high-bandwidth networks, delivery priority is less of an issue than it once was. Microsoft Exchange respects a finer-grained automatic response suppression mechanism, the X-Auto-Response-Suppress header.
- Received: Tracking information generated by mail servers that have previously handled a message, in reverse order (last handler first).
- References: Message-ID of the message that this is a reply to, and the message-id of the message the previous reply was a reply to, etc.
- Reply-To: Address that should be used to reply to the message.
- Sender: Address of the actual sender acting on behalf of the author listed in the From: field (secretary, list manager, etc.).
- Archived-At: A direct link to the archived form of an individual email message.
Note that the To: field is not necessarily related to the addresses to which the message is delivered. The actual delivery list is supplied separately to the transport protocol, SMTP, which may or may not originally have been extracted from the header content. The "To:" field is similar to the addressing at the top of a conventional letter which is delivered according to the address on the outer envelope.
Also note that the "From:" field does not have to be the real sender of the email message. One reason is that it is very easy to fake the "From:" field and let a message seem to be from any mail address. It is possible to digitally sign email, which is much harder to fake, but such signatures require extra programming and often external programs to verify.
Some ISPs do not relay email claiming to come from a domain not hosted by them, but very few (if any) check to make sure that the person or even email address named in the "From:" field is the one associated with the connection. Some ISPs apply email authentication systems to email being sent through their MTA to allow other MTAs to detect forged spam that might appear to come from them.
Recently the IETF EAI working group has defined some experimental extensions to allow Unicode characters to be used within the header. In particular, this allows email addresses to use non-ASCII characters. Such characters must only be used by servers that support these extensions.
Email was originally designed for 7-bit ASCII. Much email software is 8-bit clean but must assume it will communicate with 7-bit servers and mail readers. The MIME standard introduced character set specifiers and two content transfer encodings to enable transmission of non-ASCII data: quoted printable for mostly 7 bit content with a few characters outside that range and base64 for arbitrary binary data.
The 8BITMIME and BINARY extensions were introduced to allow transmission of mail without the need for these encodings, but many mail transport agents still do not support them fully. In some countries, several encoding schemes coexist; as the result, by default, the message in a non-Latin alphabet language appears in non-readable form (the only exception is coincidence, when the sender and receiver use the same encoding scheme). Therefore, for international character sets, Unicode is growing in popularity.
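The two MIME content transfer encodings behave as described: quoted-printable leaves ASCII text readable and escapes only the bytes outside that range, while base64 recodes everything:

```python
# Quoted-printable keeps mostly-ASCII text readable; base64 recodes
# all bytes. Both make 8-bit content safe for 7-bit transport.
import base64
import quopri

text = "café".encode("utf-8")          # contains one non-ASCII character

qp = quopri.encodestring(text)
b64 = base64.b64encode(text)
print(qp)    # the ASCII letters survive; the accented bytes are escaped
print(b64)   # every byte is recoded into the base64 alphabet
```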
Plain text and HTML
Most modern graphic email clients allow the use of either plain text or HTML for the message body at the option of the user. HTML email messages often include an automatically generated plain text copy as well, for compatibility reasons.
Advantages of HTML include the ability to include in-line links and images, set apart previous messages in block quotes, wrap naturally on any display, use emphasis such as underlines and italics, and change font styles. Disadvantages include the increased size of the email, privacy concerns about web bugs, abuse of HTML email as a vector for phishing attacks and the spread of malicious software.
Some web-based mailing lists recommend that all posts be made in plain text, with 72 or 80 characters per line, for all the above reasons and because they have a significant number of readers using text-based email clients such as Mutt.
Some Microsoft email clients allow rich formatting using RTF, but unless the recipient is guaranteed to have a compatible email client this should be avoided.
In order to ensure that HTML sent in an email is rendered properly by the recipient's client software, an additional header must be specified when sending: "Content-type: text/html". Most email programs send this header automatically.
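Python's email library handles these headers automatically when an HTML alternative is added alongside a plain-text body:

```python
# Build a message with both a plain-text body and an HTML alternative;
# the library sets the Content-Type headers (multipart/alternative,
# text/plain, text/html) itself.
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Plain text and HTML"
msg.set_content("Hello in plain text.")
msg.add_alternative("<p>Hello in <b>HTML</b>.</p>", subtype="html")

print(msg.get_content_type())          # the container type
for part in msg.iter_parts():
    print(part.get_content_type())     # one part per representation
```

Clients that cannot render HTML fall back to the text/plain part, which is why the automatically generated plain-text copy mentioned above matters.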
Servers and client applications
Messages are exchanged between hosts using the Simple Mail Transfer Protocol with software programs called mail transfer agents (MTAs); and delivered to a mail store by programs called mail delivery agents (MDAs, also sometimes called local delivery agents, LDAs).
Users can retrieve their messages from servers using standard protocols such as POP or IMAP, or, as is more likely in a large corporate environment, with a proprietary protocol specific to Novell Groupwise, Lotus Notes or Microsoft Exchange Servers. Webmail interfaces allow users to access their mail with any standard web browser, from any computer, rather than relying on an email client. Programs used by users for retrieving, reading, and managing email are called mail user agents (MUAs).
Mail can be stored on the client, on the server side, or in both places. Standard formats for mailboxes include Maildir and mbox. Several prominent email clients use their own proprietary format and require conversion software to transfer email between them. Server-side storage is often in a proprietary format but since access is through a standard protocol such as IMAP, moving email from one server to another can be done with any MUA supporting the protocol.
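As an illustration of the standard mailbox formats mentioned above, Python's stdlib mailbox module reads and writes both mbox and Maildir stores (the file path and message here are made up):

```python
import mailbox
import os
import tempfile

# Create an mbox store on disk, append one message, and read it back.
path = os.path.join(tempfile.mkdtemp(), "inbox.mbox")
store = mailbox.mbox(path)        # creates the file if absent
store.add("From: alice@example.com\nSubject: Test\n\nBody text\n")
store.flush()

for message in store:             # iterate messages in the store
    print(message["Subject"])     # -> Test
store.close()

# The same API works for Maildir stores: mailbox.Maildir(directory_path)
```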
Accepting a message obliges an MTA to deliver it, and when a message cannot be delivered, that MTA must send a bounce message back to the sender, indicating the problem.
Upon reception of email messages, email client applications save messages in operating-system files in the file system. Some clients save individual messages as separate files, while others use various database formats, often proprietary, for collective storage. A historical standard of storage is the mbox format. The specific format used is often indicated by special filename extensions:
- eml: Used by many email clients including Microsoft Outlook Express, Windows Mail and Mozilla Thunderbird. The files are plain text in MIME format, containing the email header as well as the message contents and attachments in one or more of several formats.
- emlx: Used by Apple Mail.
- msg: Used by Microsoft Office Outlook and OfficeLogic Groupware.
- mbx: Used by Opera Mail, KMail, and Apple Mail, based on the mbox format.
Some applications (like Apple Mail) leave attachments encoded in messages for searching while also saving separate copies of the attachments. Others separate attachments from messages and save them in a specific directory.
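A sketch of how a client separates attachments from the message body, using Python's email parser on a hypothetical two-part MIME message:

```python
from email import message_from_string
from email.policy import default

# A made-up raw message with one attachment, parsed with the modern
# email policy so that iter_attachments() is available.
raw = (
    "From: alice@example.com\n"
    "To: bob@example.com\n"
    "Subject: Report\n"
    "MIME-Version: 1.0\n"
    'Content-Type: multipart/mixed; boundary="XYZ"\n'
    "\n"
    "--XYZ\n"
    "Content-Type: text/plain\n"
    "\n"
    "See attached.\n"
    "--XYZ\n"
    "Content-Type: text/plain\n"
    'Content-Disposition: attachment; filename="notes.txt"\n'
    "\n"
    "attached contents\n"
    "--XYZ--\n"
)
msg = message_from_string(raw, policy=default)
for part in msg.iter_attachments():      # walks only the attachment parts
    print(part.get_filename())           # -> notes.txt
    print(part.get_content())
```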
URI scheme mailto:
The URI scheme, as registered with the IANA, defines the mailto: scheme for SMTP email addresses. Though its use is not strictly defined, URLs of this form are intended to be used to open the new message window of the user's mail client when the URL is activated, with the address as defined by the URL in the To: field.
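A minimal sketch of composing and dissecting a mailto: URL with Python's stdlib (the address is a placeholder; header values in the query string must be percent-encoded):

```python
from urllib.parse import urlsplit, quote

# Compose a mailto: URL; a subject or body rides in the query string.
address = "someone@example.com"
url = f"mailto:{address}?subject={quote('Hello world')}"
print(url)                        # mailto:someone@example.com?subject=Hello%20world

# A mail client dissects it back into the To: address and header fields:
parts = urlsplit(url)
print(parts.scheme, parts.path)   # mailto someone@example.com
print(parts.query)                # subject=Hello%20world
```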
There are numerous ways in which people have changed the way they communicate in the last 50 years; email is certainly one of them. Traditionally, social interaction in the local community was the basis for communication – face to face.
Yet, today face-to-face meetings are no longer the primary way to communicate as one can use a landline telephone, mobile phones, fax services, or any number of the computer mediated communications such as email.
Flaming occurs when a person sends a message with angry or antagonistic content. The term is derived from the use of the word "incendiary" to describe particularly heated email discussions. Flaming is assumed to be more common today because of the ease and impersonality of email communications: confrontations in person or via telephone require direct interaction, where social norms encourage civility, whereas typing a message to another person is an indirect interaction, so civility may be forgotten. Flaming is generally looked down upon by Internet communities as it is considered rude and non-productive.
Also known as "email fatigue", email bankruptcy is when a user ignores a large number of email messages after falling behind in reading and answering them. The reason for falling behind is often due to information overload and a general sense there is so much information that it is not possible to read it all.
As a solution, people occasionally send a boilerplate message explaining that the email inbox is being cleared out. Harvard University law professor Lawrence Lessig is credited with coining this term, but he may only have popularized it.
Email was widely accepted by the business community as the first broad electronic communication medium and was the first ‘e-revolution’ in business communication. Email is very simple to understand and like postal mail, email solves two basic problems of communication: logistics and synchronization (see below).
LAN-based email is also an emerging form of usage for business. It not only allows the business user to download mail when offline, it also allows a small business to have multiple users' email IDs with just one email connection.
- The problem of logistics: Much of the business world relies upon communications between people who are not physically in the same building, area or even country; setting up and attending an in-person meeting, telephone call, or conference call can be inconvenient, time-consuming, and costly. Email provides a way to exchange information between two or more people with no set-up costs and that is generally far less expensive than physical meetings or phone calls.
- The problem of synchronization: With real time communication by meetings or phone calls, participants have to work on the same schedule, and each participant must spend the same amount of time in the meeting or call. Email allows asynchrony: each participant may control their schedule independently.
Most business workers today spend from one to two hours of their working day on email: reading, ordering, sorting, ‘re-contextualizing’ fragmented information, and writing email. The use of email is increasing due to increasing levels of globalization—labour division and outsourcing amongst other things. Email can lead to some well-known problems:
- Loss of context: fragments of information arrive stripped of their original context, and that context cannot be recovered. Information in context (as in a newspaper) is much easier and faster to understand than unedited and sometimes unrelated fragments of information. Communicating in context can only be achieved when both parties have a full understanding of the context and issue in question.
- Information overload: Email is a push technology—the sender controls who receives the information. Convenient availability of mailing lists and use of "copy all" can lead to people receiving unwanted or irrelevant information of no use to them.
- Inconsistency: Email can duplicate information. This can be a problem when a large team is working on documents and information while not in constant contact with the other members of their team.
- Liability. Statements made in an email can be deemed legally binding and be used against a party in a Court of law.
Despite these disadvantages, email has become the most widely used medium of communication within the business world. In fact, a 2010 study on workplace communication found that 83% of U.S. knowledge workers felt that email was critical to their success and productivity at work.
Attachment size limitation
Email messages may have one or more attachments. Attachments serve the purpose of delivering binary or text files of unspecified size. In principle there is no technical intrinsic restriction in the SMTP protocol limiting the size or number of attachments. In practice, however, email service providers implement various limitations on the permissible size of files or the size of an entire message.
Furthermore, for technical reasons, a small attachment often increases in size when sent, which can confuse senders trying to judge whether a file is small enough to send by email, and can result in their message being rejected.
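The growth comes largely from the transfer encoding: binary attachments are typically base64-encoded for transport, which adds roughly one third to their size, as this back-of-the-envelope check shows:

```python
import base64

# Base64 produces 4 output bytes for every 3 input bytes (~33% overhead),
# and MIME-style encoding adds a line break every 76 characters on top.
payload = bytes(300_000)                 # a 300 kB binary attachment
encoded = base64.encodebytes(payload)    # MIME-style, with line breaks
print(len(payload), len(encoded))        # 300000 vs ~405,000 bytes

ratio = len(encoded) / len(payload)
print(round(ratio, 2))                   # about 1.35
```

So a file that looks comfortably under a provider's limit on disk may exceed it once encoded for transport.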
As larger and larger file sizes are being created and traded, many users are either forced to upload and download their files using an FTP server, or more popularly, use online file sharing facilities or services, usually over web-friendly HTTP, in order to send and receive them.
A December 2007 New York Times blog post described information overload as "a $650 Billion Drag on the Economy", and the New York Times reported in April 2008 that "E-MAIL has become the bane of some people’s professional lives" due to information overload, yet "none of the current wave of high-profile Internet start-ups focused on e-mail really eliminates the problem of e-mail overload because none helps us prepare replies".
GigaOm posted a similar article in September 2010, highlighting research that found 57% of knowledge workers were overwhelmed by the volume of email they received. Technology investors reflect similar concerns.
In October 2010, CNN published an article titled "Happy Information Overload Day" that compiled research on email overload from IT companies and productivity experts. According to Basex, the average knowledge worker receives 93 emails a day. Subsequent studies have reported higher numbers.
Marsha Egan, an email productivity expert, called email technology both a blessing and a curse in the article. "Everyone just learns that they have to have it dinging and flashing and open just in case the boss e-mails," she said. "The best gift any group can give each other is to never use e-mail urgently. If you need it within three hours, pick up the phone."
Spamming and computer viruses
The usefulness of email is being threatened by four phenomena: email bombardment, spamming, phishing, and email worms.
Spamming is unsolicited commercial (or bulk) email. Because of the very low cost of sending email, spammers can send hundreds of millions of email messages each day over an inexpensive Internet connection. Hundreds of active spammers sending this volume of mail results in information overload for many computer users who receive voluminous unsolicited email each day.
Email worms use email as a way of replicating themselves into vulnerable computers. Although the first email worm affected UNIX computers, the problem is most common today on the more popular Microsoft Windows operating system.
The combination of spam and worm programs results in users receiving a constant drizzle of junk email, which reduces the usefulness of email as a practical tool.
A number of anti-spam techniques mitigate the impact of spam. In the United States, Congress has also passed a law, the CAN-SPAM Act of 2003, attempting to regulate such email.
Australia also has very strict spam laws restricting the sending of spam from an Australian ISP, but their impact has been minimal since most spam originates in jurisdictions that seem reluctant to regulate it.
Email spoofing occurs when the header information of an email is altered to make the message appear to come from a known or trusted source. It is often used as a ruse to collect personal information.
Email bombing is the intentional sending of large volumes of messages to a target address. The overloading of the target email address can render it unusable and can even cause the mail server to crash.
Today it can be important to distinguish between Internet and internal email systems. Internet email may travel and be stored on networks and computers without the sender's or the recipient's control.
During the transit time it is possible that third parties read or even modify the content. Internal mail systems, in which the information never leaves the organizational network, may be more secure, although information technology personnel and others whose function may involve monitoring or managing may be accessing the email of other employees.
Email privacy, without some security precautions, can be compromised because:
- email messages are generally not encrypted.
- email messages have to go through intermediate computers before reaching their destination, meaning it is relatively easy for others to intercept and read messages.
- many Internet Service Providers (ISPs) store copies of email messages on their mail servers before they are delivered. The backups of these can remain for up to several months on their server, despite deletion from the mailbox.
- the "Received:"-fields and other information in the email can often identify the sender, preventing anonymous communication.
There are cryptography applications that can serve as a remedy to one or more of the above.
For example, Virtual Private Networks or the Tor anonymity network can be used to encrypt traffic from the user machine to a safer network while GPG, PGP, SMEmail, or S/MIME can be used for end-to-end message encryption, and SMTP STARTTLS or SMTP over Transport Layer Security/Secure Sockets Layer can be used to encrypt communications for a single mail hop between the SMTP client and the SMTP server.
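For the single-hop case, here is a minimal sketch of upgrading an SMTP session with STARTTLS using Python's smtplib (the hostname and addresses are placeholders, and no connection is actually made here):

```python
import smtplib
import ssl

# Hop-by-hop transport encryption (not end-to-end): the client asks the
# server to upgrade the plain SMTP session to TLS before any mail data
# or credentials cross the wire.
context = ssl.create_default_context()

def send_over_starttls(host, sender, recipient, message):
    with smtplib.SMTP(host, 587) as smtp:    # 587 is the submission port
        smtp.starttls(context=context)       # encrypt this hop only
        smtp.sendmail(sender, recipient, message)
```

Note this protects only the client-to-server hop; later relays may still handle the message in the clear, which is why end-to-end schemes such as PGP or S/MIME exist.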
Additionally, many mail user agents do not protect logins and passwords, making them easy to intercept by an attacker. Encrypted authentication schemes such as SASL prevent this.
Finally, attached files share many of the same hazards as those found in peer-to-peer filesharing. Attached files may contain trojans or viruses.
Tracking of sent mail
The original SMTP mail service provides limited mechanisms for tracking a transmitted message, and none for verifying that it has been delivered or read. It requires that each mail server must either deliver it onward or return a failure notice (bounce message), but both software bugs and system failures can cause messages to be lost.
To remedy this, the IETF introduced Delivery Status Notifications (delivery receipts) and Message Disposition Notifications (return receipts); however, these are not universally deployed in production. (A complete Message Tracking mechanism was also defined, but it never gained traction; see RFCs 3885 through 3888.)
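Where the server advertises the DSN extension (RFC 3461), a client requests a delivery receipt by adding a NOTIFY parameter to the RCPT command. A hedged smtplib sketch (host and addresses are placeholders; nothing is sent here):

```python
import smtplib

# Request a Delivery Status Notification on success or failure.
# Servers that do not support the DSN extension may reject or simply
# ignore the extra RCPT parameter.
def send_with_dsn(host, sender, recipient, message):
    with smtplib.SMTP(host) as smtp:
        smtp.sendmail(sender, recipient, message,
                      rcpt_options=["NOTIFY=SUCCESS,FAILURE"])
```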
Many ISPs now deliberately disable non-delivery reports (NDRs) and delivery receipts due to the activities of spammers:
- Delivery Reports can be used to verify whether an address exists and so is available to be spammed
- If the spammer uses a forged sender email address (E-mail spoofing), then the innocent email address that was used can be flooded with NDRs from the many invalid email addresses the spammer may have attempted to mail. These NDRs then constitute spam from the ISP to the innocent user
There are a number of systems that allow the sender to see if messages have been opened. The receiver could also let the sender know that the emails have been opened through an "Okay" button. A check sign can appear in the sender's screen when the receiver's "Okay" button is pressed.
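The standardized form of such an opened-message confirmation is a Message Disposition Notification request, carried as a single extra header that the recipient's client may honor or silently ignore. A sketch with placeholder addresses:

```python
from email.message import EmailMessage

# Ask the receiving client to send back a return receipt (RFC 8098).
msg = EmailMessage()
msg["From"] = "alice@example.com"
msg["To"] = "bob@example.com"
msg["Subject"] = "Please confirm"
msg["Disposition-Notification-To"] = "alice@example.com"  # receipt goes here
msg.set_content("Did this arrive?")
print(msg["Disposition-Notification-To"])   # alice@example.com
```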
The US Government has been involved in email in several different ways.
Starting in 1977, the US Postal Service (USPS) recognized that electronic mail and electronic transactions posed a significant threat to First Class mail volumes and revenue. Therefore, the USPS initiated an experimental email service known as E-COM.
Electronic messages were transmitted to a post office, printed out, and delivered as hard copy. To take advantage of the service, an individual had to transmit at least 200 messages. The delivery time of the messages was the same as First Class mail and cost 26 cents.
Both the Postal Regulatory Commission and the Federal Communications Commission opposed E-COM. The FCC concluded that E-COM constituted common carriage under its jurisdiction and the USPS would have to file a tariff. Three years after initiating the service, USPS canceled E-COM and attempted to sell it off.
The early ARPANET dealt with multiple email clients that had various, and at times incompatible, formats. For example, in Multics the "@" sign meant "kill line" and anything before the "@" sign was ignored, so Multics users had to use a command-line option to specify the destination system.
The Department of Defense DARPA desired to have uniformity and interoperability for email and therefore funded efforts to drive towards unified inter-operable standards. This led to David Crocker, John Vittal, Kenneth Pogran, and Austin Henderson publishing RFC 733, "Standard for the Format of ARPA Network Text Message" (November 21, 1977), which was apparently not effective. In 1979, a meeting was held at BBN to resolve incompatibility issues.
Jon Postel recounted the meeting in RFC 808, "Summary of Computer Mail Services Meeting Held at BBN on 10 January 1979" (March 1, 1982), which includes an appendix listing the varying email systems at the time. This, in turn, led to the release of David Crocker's RFC 822, "Standard for the Format of ARPA Internet Text Messages" (August 13, 1982).
The National Science Foundation took over operations of the ARPANET and Internet from the Department of Defense, and initiated NSFNet, a new backbone for the network. A part of the NSFNet AUP forbade commercial traffic. In 1988, Vint Cerf arranged for an interconnection of MCI Mail with NSFNET on an experimental basis. The following year CompuServe email interconnected with NSFNET. Within a few years the commercial traffic restriction was removed from NSFNET's AUP, and NSFNET was privatized.
In the late 1990s, the Federal Trade Commission grew concerned with fraud transpiring in email, and initiated a series of procedures on spam, fraud, and phishing. In 2004, FTC jurisdiction over spam was codified into law in the form of the CAN SPAM Act. Several other US Federal Agencies have also exercised jurisdiction including the Department of Justice and the Secret Service.
NASA has provided email capabilities to astronauts aboard the Space Shuttle and International Space Station since 1991, when a Macintosh Portable was used aboard Space Shuttle mission STS-43 to send the first email via AppleLink. Today astronauts aboard the International Space Station have email capabilities via wireless networking throughout the station and are connected to the ground at 3 Mbit/s Earth-to-station and 10 Mbit/s station-to-Earth, comparable to home DSL connection speeds.
- Email encryption
- HTML email
- Internet fax
- L- or letter mail, e-mail letter and letter e-mail
- Privacy-enhanced Electronic Mail
- Push email
Email social issues
- Anti-spam techniques (e-mail)
- CompuServe (first consumer service)
- Computer virus
- E-mail art
- E-mail jamming
- E-mail spam
- E-mail spoofing
- E-mail storm
- E-mail subject abbreviations
- Information overload
- Internet humor
- Internet slang
- Usenet quoting
Clients and servers
- Email address
- Email authentication
- Email client, Comparison of email clients
- Email hosting service
- Internet mail standards
- Mail transfer agent
- Mail user agent
- Unicode and e-mail
- Anonymous remailer
- Disposable e-mail address
- Email digest
- E-mail encryption
- E-mail tracking
- Electronic mailing list
- Mailing list archive
Email spam, also known as junk email or unsolicited bulk email (UBE), is a subset of spam that involves nearly identical messages sent to numerous recipients by email. Definitions of spam usually include the aspects that email is unsolicited and sent in bulk. One subset of UBE is UCE (unsolicited commercial email). The opposite of "spam", email which one wants, is called "ham", usually when referring to a message's automated analysis (such as Bayesian filtering).
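The Bayesian filtering mentioned above can be sketched as a toy classifier: estimate how much each word shifts the odds toward spam, using word frequencies from a labeled training set (the tiny corpus here is obviously artificial):

```python
import math
from collections import Counter

# Labeled training mail (illustrative only).
spam_mail = ["cheap pills online", "online casino win", "cheap win now"]
ham_mail = ["meeting agenda attached", "lunch tomorrow", "project meeting notes"]

def word_counts(mails):
    c = Counter()
    for m in mails:
        c.update(m.split())
    return c

spam_words, ham_words = word_counts(spam_mail), word_counts(ham_mail)

def spam_score(text, prior=0.5):
    # Accumulate log-odds, with Laplace smoothing so unseen words
    # never produce a zero probability.
    log_odds = math.log(prior / (1 - prior))
    s_total, h_total = sum(spam_words.values()), sum(ham_words.values())
    vocab = len(set(spam_words) | set(ham_words))
    for w in text.split():
        p_w_spam = (spam_words[w] + 1) / (s_total + vocab)
        p_w_ham = (ham_words[w] + 1) / (h_total + vocab)
        log_odds += math.log(p_w_spam / p_w_ham)
    return 1 / (1 + math.exp(-log_odds))   # back to a probability

print(spam_score("cheap pills") > 0.5)       # True: looks like spam
print(spam_score("meeting tomorrow") < 0.5)  # True: looks like ham
```

Real filters train on thousands of messages and add many refinements (token weighting, header features, whitelists), but the log-odds core is the same.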
Email spam has steadily grown since the early 1990s. Botnets, networks of virus-infected computers, are used to send about 80% of spam. Since the expense of the spam is borne mostly by the recipient, it is effectively postage-due advertising.
The legal status of spam varies from one jurisdiction to another. In the United States, spam was declared to be legal by the CAN-SPAM Act of 2003 provided the message adheres to certain specifications. ISPs have attempted to recover the cost of spam through lawsuits against spammers, although they have been mostly unsuccessful in collecting damages despite winning in court.
Spammers collect email addresses from chat rooms, websites, customer lists, newsgroups, and viruses which harvest users' address books; these addresses are sometimes sold to other spammers. They also use a practice known as "email appending" or "epending", in which they use known information about their target (such as a postal address) to search for the target's email address.
Much of spam is sent to invalid email addresses. Spam averages 78% of all email sent. According to the Message Anti-Abuse Working Group, the amount of spam email was between 88–92% of email messages sent in the first half of 2010.
From the beginning of the Internet (the ARPANET), sending of junk email has been prohibited, enforced by the Terms of Service/Acceptable Use Policy (ToS/AUP) of internet service providers (ISPs) and peer pressure.
Even with a thousand users junk email for advertising is not tenable, and with a million users it is not only impractical, but also expensive. It is estimated that spam cost businesses on the order of $100 billion in 2007. As the scale of the spam problem has grown, ISPs and the public have turned to government for relief from spam, which has failed to materialize.
Spam has several definitions, varying by source.
Unsolicited bulk email (UBE)—unsolicited email, sent in large quantities.
Unsolicited commercial email (UCE)—this more restrictive definition is used by regulators whose mandate is to regulate commerce, such as the U.S. Federal Trade Commission.
Many spam emails contain URLs to a website or websites. According to a Commtouch report in the first quarter of 2010, there are "...183 billion spam messages" sent every day. The most popular spam topic is "pharmacy ads" which make up 81% of email spam messages.
Most common products advertised
According to information compiled by Commtouch Software Ltd., email spam for the first quarter of 2010 can be broken down as follows.
Email Spam by Topic
- Pharmacy: 81%
- Enhancers: 2.3%
- Phishing: 2.3%
- Degrees: 1.3%
- Casino: 1%
- Weight Loss: 0.4%
- Other: 6.3%
Advance fee fraud spam such as the Nigerian "419" scam may be sent by a single individual from a cyber cafe in a developing country. Organized "spam gangs" operating from Russia or eastern Europe share many features in common with other forms of organized crime, including turf battles and revenge killings.
Spam is also a medium for fraudsters to scam users into entering personal information on fake Web sites using emails forged to look like they are from banks or other organizations, such as PayPal. This is known as phishing. Targeted phishing, where known information about the recipient is used to create forged emails, is known as spear-phishing.
If a marketer has one database containing names, addresses, and telephone numbers of prospective customers, they can pay to have their database matched against an external database containing email addresses. The company then has the means to send email to persons who have not requested email, which may include persons who have deliberately withheld their email address.
Image spam is an obfuscating method in which the text of the message is stored as a GIF or JPEG image and displayed in the email. This prevents text based spam filters from detecting and blocking spam messages. Image spam was reportedly used in the mid 2000s to advertise "pump and dump" stocks.
Often, image spam contains nonsensical, computer-generated text which simply annoys the reader. However, some new technology programs try to read the images by attempting to find text in these images. They are not very accurate, and sometimes filter out innocent images of products like a box that has words on it.
A newer technique, however, is to use an animated GIF image that does not contain clear text in its initial frame, or to contort the shapes of letters in the image (as in CAPTCHA) to avoid detection by OCR tools.
Blank spam is spam lacking a payload advertisement. Often the message body is missing altogether, as well as the subject line. Still, it fits the definition of spam because of its nature as bulk and unsolicited email.
Blank spam may originate in different ways, either intentionally or unintentionally:
- Blank spam can have been sent in a directory harvest attack, a form of dictionary attack for gathering valid addresses from an email service provider. Since the goal in such an attack is to use the bounces to separate invalid addresses from the valid ones, spammers may dispense with most elements of the header and the entire message body, and still accomplish their goals.
- Blank spam may also occur when a spammer forgets or otherwise fails to add the payload when he or she sets up the spam run.
- Often blank spam headers appear truncated, suggesting that computer glitches may have contributed to this problem—from poorly-written spam software to shoddy relay servers, or any problems that may truncate header lines from the message body.
- Some spam may appear to be blank when in fact it is not. An example of this is the VBS.Davinia.B email worm which propagates through messages that have no subject line and appears blank, when in fact it uses HTML code to download other files.
Backscatter is a side-effect of email spam, viruses and worms, where email servers receiving spam and other mail send bounce messages to an innocent party. This occurs because the original message's envelope sender is forged to contain the email address of the victim. A very large proportion of such email is sent with a forged From: header, matching the envelope sender.
Since these messages were not solicited by the recipients, are substantially similar to each other, and are delivered in bulk quantities, they qualify as unsolicited bulk email or spam. As such, systems that generate email backscatter can end up being listed on various DNSBLs and be in violation of internet service providers' Terms of Service.
Sending spam violates the Acceptable use policy (AUP) of almost all Internet service providers. Providers vary in their willingness or ability to enforce their AUP. Some actively enforce their terms and terminate spammers' accounts without warning. Some ISPs lack adequate personnel or technical skills for enforcement, while others may be reluctant to enforce restrictive terms against profitable customers.
As the recipient directly bears the cost of delivery, storage, and processing, one could regard spam as the electronic equivalent of "postage-due" junk mail. Due to the low cost of sending unsolicited email and the potential profit entailed, some believe that only strict legal enforcement can stop junk email. The Coalition Against Unsolicited Commercial Email (CAUCE) argues "Today, much of the spam volume is sent by career criminals and malicious hackers who won't stop until they're all rounded up and put in jail."
All the countries of the European Union have passed laws that specifically target spam.
Article 13 of the European Union Directive on Privacy and Electronic Communications (2002/58/EC) provides that the EU member states shall take appropriate measures to ensure that unsolicited communications for the purposes of direct marketing are not allowed either without the consent of the subscribers concerned or in respect of subscribers who do not wish to receive these communications, the choice between these options to be determined by national legislation.
In the United Kingdom, for example, unsolicited e-mails cannot be sent to an individual subscriber unless prior permission has been obtained or unless there is a previous relationship between the parties. The regulations can be enforced against an offending company or individual anywhere in the European Union.
The Information Commissioner's Office has responsibility for the enforcement of unsolicited e-mails and considers complaints about breaches. A breach of an enforcement notice is a criminal offence subject to a fine of up to £5000.
The Government of Canada has passed anti-spam legislation, the Fighting Internet and Wireless Spam Act.
In Australia, the relevant legislation is the Spam Act 2003, which covers some types of email and phone spam and took effect on 11 April 2004. The Spam Act provides that "Unsolicited commercial electronic messages must not be sent." Whether an email is unsolicited depends on whether the sender has consent.
Consent can be express or inferred. Express consent is when someone directly instructs the sender to send them emails, e.g. by opting in. Consent can also be inferred from the business relationship between the sender and recipient, or if the recipient conspicuously publishes their email address in a public place (such as on a website). Penalties are up to 10,000 penalty units, or 2,000 penalty units for a person other than a body corporate.
In the United States, most states enacted anti-spam laws during the late 1990s and early 2000s. Many of these have since been pre-empted by the less restrictive CAN-SPAM Act of 2003.
Spam is legally permissible according to the CAN-SPAM Act of 2003 provided it follows certain criteria: a "truthful" subject line, no forged information in the technical headers or sender address, and other minor requirements. If the spam fails to comply with any of these requirements it is illegal. Aggravated or accelerated penalties apply if the spammer harvested the email addresses using methods described earlier.
A review of the effectiveness of CAN-SPAM in 2005 by the Federal Trade Commission (the agency charged with CAN-SPAM enforcement) stated that the amount of sexually explicit spam had significantly decreased since 2003 and the total volume had begun to level off.
Senator Conrad Burns, a principal sponsor, noted that "Enforcement is key regarding the CAN-SPAM legislation." In 2004, less than 1% of spam complied with the CAN-SPAM Act of 2003. In contrast to the FTC evaluation, many observers view the CAN-SPAM act as having failed in its purpose of reducing spam.
Legislative efforts to curb spam have been ineffective or counter-productive. For example, the CAN-SPAM Act of 2003 requires that each message include a means to "opt out" (i.e., decline future email from the same source). It is widely believed that responding to opt out requests is unwise, as this merely confirms to the spammer that they have reached an active email account.
To the extent this is true, the CAN-SPAM Act's opt-out provisions are counter-productive in two ways: first, recipients who are aware of the potential risks of opting out will decline to do so; second, attempts to opt-out will provide spammers with useful information on their targets. A 2002 study by the Center for Democracy and Technology found that about 16% of web sites tested with opt-out requests continued to spam.
Accessing privately owned computer resources without the owner's permission counts as illegal under computer crime statutes in most nations. Deliberate spreading of computer viruses is also illegal in the United States and elsewhere.
Thus, some common behaviors of spammers are criminal regardless of the legality of spamming per se. Even before the advent of laws specifically banning or regulating spamming, spammers were successfully prosecuted under computer fraud and abuse laws for wrongfully using others' computers.
The use of botnets can be perceived as theft. The spammer consumes a zombie owner's bandwidth and resources without any cost. In addition, spam is perceived as theft of services.
The receiving SMTP servers consume significant amounts of system resources dealing with this unwanted traffic. As a result, service providers have to spend large amounts of money to make their systems capable of handling these amounts of email. Such costs are inevitably passed on to the service providers' customers.
Other laws, not only those related to spam, have been used to prosecute alleged spammers. For example, Alan Ralsky was indicted on stock fraud charges in January 2008, and Robert Soloway pleaded guilty to charges of mail fraud, fraud in connection with electronic mail, and failing to file a tax return in March 2008.
Deception and fraud
Spammers may engage in deliberate fraud to send out their messages. Spammers often use false names, addresses, phone numbers, and other contact information to set up "disposable" accounts at various Internet service providers. They also often use falsified or stolen credit card numbers to pay for these accounts. This allows them to move quickly from one account to the next as the host ISPs discover and shut down each one.
Senders may go to great lengths to conceal the origin of their messages. Large companies may hire another firm to send their messages so that complaints or blocking of email falls on a third party.
Others engage in spoofing of email addresses (much easier than IP address spoofing). The email protocol (SMTP) has no authentication by default, so the spammer can pretend to originate a message apparently from any email address. To prevent this, some ISPs and domains require the use of SMTP-AUTH, allowing positive identification of the specific account from which an email originates.
Senders cannot completely spoof email delivery chains (the 'Received' header), since the receiving mailserver records the actual connection from the last mailserver's IP address. To counter this, some spammers forge additional delivery headers to make it appear as if the email had previously traversed many legitimate servers.
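The chain of 'Received' headers can be walked programmatically. The following is a minimal sketch (not a complete header parser): it pulls out the IP address each receiving server recorded, most recent hop first. Only the top-most header, written by a server the recipient trusts, is reliable; everything below it may be forged by the sender. The hostnames and IPs used here are illustrative documentation addresses.

```python
import re

# Match the bracketed IP a receiving server records in a Received: header,
# e.g. "Received: from host (host [203.0.113.7]) by ..."
RECEIVED_IP = re.compile(r"from\s+\S+\s+\(.*?\[(\d{1,3}(?:\.\d{1,3}){3})\]")

def received_chain(raw_headers: str) -> list[str]:
    """Return the IPs recorded in Received: headers, most recent hop first."""
    ips = []
    for line in raw_headers.splitlines():
        if line.startswith("Received:"):
            m = RECEIVED_IP.search(line)
            if m:
                ips.append(m.group(1))
    return ips

headers = (
    "Received: from mx.example.net (mx.example.net [203.0.113.7]) by inbox.example.org\n"
    "Received: from forged.example.com (unknown [198.51.100.9]) by mx.example.net\n"
)
print(received_chain(headers))  # ['203.0.113.7', '198.51.100.9']
```

A spammer can append forged lines below the genuine ones, which is why anti-spam tools trust only the headers added by servers they control.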
Spoofing can have serious consequences for legitimate email users. Not only can their email inboxes get clogged up with "undeliverable" emails in addition to volumes of spam, they can mistakenly be identified as a spammer. Not only may they receive irate email from spam victims, but (if spam victims report the email address owner to the ISP, for example) a naive ISP may terminate their service for spamming.
Theft of service
Spammers frequently seek out and make use of vulnerable third-party systems such as open mail relays and open proxy servers. SMTP forwards mail from one server to another—mail servers that ISPs run commonly require some form of authentication to ensure that the user is a customer of that ISP. Open relays, however, do not properly check who is using the mail server and pass all mail to the destination address, making it harder to track down spammers.
Increasingly, spammers use networks of malware-infected PCs (zombies) to send their spam. Zombie networks are also known as botnets (the zombifying malware is known as a bot, short for robot). In June 2006, an estimated 80% of email spam was sent by zombie PCs, an increase of 30% from the prior year. An estimated 55 billion spam emails were sent each day in June 2006, an increase of 25 billion per day from June 2005.
For Q1 2010, an estimated 305,000 newly activated zombie PCs were brought online each day for malicious activity. This number is slightly lower than the 312,000 of Q4 2009.
Brazil produced the most zombies in the first quarter of 2010, as the source of 20% of all zombies, up from 14% in the fourth quarter of 2009. India had 10%, with Vietnam at 8% and the Russian Federation at 7%.
To combat the problems posed by botnets, open relays, and proxy servers, many email server administrators pre-emptively block dynamic IP ranges and impose stringent requirements on other servers wishing to deliver mail. Forward-confirmed reverse DNS must be correctly set for the outgoing mail server, and large swaths of IP addresses are sometimes blocked pre-emptively to prevent spam.
These measures can pose problems for people wanting to run a small email server off an inexpensive domestic connection. Blacklisting of IP ranges due to spam emanating from them also causes problems for legitimate email servers in the same IP range.
Statistics and estimates
The total volume of email spam has been consistently growing. The amount of spam users see in their mailboxes is only a portion of total spam sent, since spammers' lists often contain a large percentage of invalid addresses and many spam filters simply delete or reject "obvious spam."
The first known spam email, advertising a DEC product presentation, was sent in 1978 by Gary Thuerk to 600 addresses, which was all the users of ARPANET at the time, though software limitations meant only slightly more than half of the intended recipients actually received it. As of August 2010, the amount of spam was estimated to be around 200 billion spam messages sent per day.
More than 97% of all emails sent over the net are unwanted, according to a Microsoft security report. MAAWG estimates that 85% of incoming mail is "abusive email", as of the second half of 2007. The sample size for the MAAWG's study was over 100 million mailboxes.
A 2010 survey of US and European email users showed that 46% of the respondents had opened spam messages, although only 11% had clicked on a link.
Highest amount of spam received
According to Steve Ballmer, Microsoft founder Bill Gates receives four million emails per year, most of them spam. This was originally incorrectly reported as "per day".
At the same time Jef Poskanzer, owner of the domain name acme.com, was receiving over one million spam emails per day.
Cost of spam
A 2004 survey estimated that lost productivity costs Internet users in the United States $21.58 billion annually, while another reported the cost at $17 billion, up from $11 billion in 2003. The worldwide productivity cost of spam has been estimated at $50 billion in 2005. An estimate of the percentage cost borne by the sender of marketing junk mail (snail mail) is 88%, whereas in 2001 one spam was estimated to cost $0.10 for the receiver and $0.00001 (0.01% of the cost) for the sender.
Origin of spam
Origin or source of spam refers to the geographical location of the computer from which the spam is sent; it is not the country where the spammer resides, nor the country that hosts the spamvertised site. Because of the international nature of spam, the spammer, the hijacked spam-sending computer, the spamvertised server, and the user target of the spam are all often located in different countries. As much as 80% of spam received by Internet users in North America and Europe can be traced to fewer than 200 spammers.
In terms of volume of spam: According to Sophos, the major sources of spam in the fourth quarter of 2008 (October to December) were:
The United States (the origin of 19.8% of spam messages, up from 18.9% in Q3)
China (9.9%, up from 5.4%)
Russia (6.4%, down from 8.3%)
Brazil (6.3%, up from 4.5%)
Turkey (4.4%, down from 8.2%)
When grouped by continents, spam comes mostly from:
Asia (37.8%, down from 39.8%)
North America (23.6%, up from 21.8%)
Europe (23.4%, down from 23.9%)
South America (12.9%, down from 13.2%)
In terms of number of IP addresses: the Spamhaus Project (which measures spam sources in terms of number of IP addresses used for spamming, rather than volume of spam sent) ranks the top three as the United States, China, and Russia, followed by Japan, Canada, and South Korea.
In terms of networks: As of 5 June 2007, the three networks hosting the most spammers are Verizon, AT&T, and VSNL International. Verizon inherited many of these spam sources from its acquisition of MCI, specifically through the UUNet subsidiary of MCI, which Verizon subsequently renamed Verizon Business.
The U.S. Department of Energy Computer Incident Advisory Capability (CIAC) has provided specific countermeasures against electronic mail spamming.
Some popular methods for filtering and refusing spam include email filtering based on the content of the email, DNS-based black-hole lists (DNSBL), greylisting, spamtraps, enforcing technical requirements of email (SMTP), check-summing systems to detect bulk email, and putting some sort of cost on the sender via a proof-of-work system or a micropayment. Each method has strengths and weaknesses, and each is controversial because of its weaknesses.
For example, one company's offer to "[remove] some spam-trap and honey-pot addresses" from email lists defeats the ability of those methods to identify spammers.
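The lookup side of a DNS-based black-hole list is simple enough to sketch: the client reverses the octets of the connecting IP address, prepends them to the list's zone, and resolves the resulting name; an answer (conventionally in 127.0.0.0/8) means the address is listed. The zone name below is a placeholder, not a real blacklist.

```python
import ipaddress

def dnsbl_query_name(ip: str, zone: str = "dnsbl.example.org") -> str:
    """Build the DNS name a mail server would resolve to check a DNSBL.

    "dnsbl.example.org" is a hypothetical zone used for illustration.
    """
    octets = ipaddress.IPv4Address(ip).exploded.split(".")
    return ".".join(reversed(octets)) + "." + zone

print(dnsbl_query_name("192.0.2.99"))  # 99.2.0.192.dnsbl.example.org
```

A real server would then attempt `socket.gethostbyname()` on that name and treat a resolution failure (NXDOMAIN) as "not listed".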
In one study, 95% of spam revenues cleared through just three banks.
How spammers operate
Gathering of addresses
In order to send spam, spammers need to obtain the email addresses of the intended recipients. To this end, both spammers themselves and list merchants gather huge lists of potential email addresses.
Since spam is, by definition, unsolicited, this address harvesting is done without the consent (and sometimes against the expressed will) of the address owners. As a consequence, spammers' address lists are inaccurate. A single spam run may target tens of millions of possible addresses – many of which are invalid, malformed, or undeliverable.
Sometimes, if the sent spam is "bounced" or sent back to the sender by various programs that eliminate spam, or if the recipient clicks on an unsubscribe link, that may cause that email address to be marked as "valid", which is interpreted by the spammer as "send me more".
Obfuscating message content
Many spam-filtering techniques work by searching for patterns in the headers or bodies of messages. For instance, a user may decide that all email they receive with the word "Viagra" in the subject line is spam, and instruct their mail program to automatically delete all such messages.
To defeat such filters, the spammer may intentionally misspell commonly filtered words or insert other characters, often in a style similar to leetspeak, as in the following examples: V1agra, Via'gra, Vi@graa, vi*gra, \/iagra. This also allows for many different ways to express a given word, making identifying them all more difficult for filter software.
The principle of this method is to leave the word readable to humans (who can easily recognize the intended word for such misspellings), but not likely to be recognized by a literal computer program. This is only somewhat effective, because modern filter patterns have been designed to recognize blacklisted terms in the various iterations of misspelling.
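The filter-side counter-measure described above can be sketched as a normalisation pass: map common character substitutions back to letters, strip punctuation, and squeeze repeated letters before comparing against a blacklisted term. The substitution table and test words here are illustrative only, and the sketch deliberately shows a limitation: variants that *delete* a letter (like "vi*gra") defeat it and would need fuzzier matching, such as edit distance.

```python
import re

# Illustrative table of common character substitutions.
SUBS = str.maketrans({"1": "i", "@": "a", "0": "o", "$": "s", "3": "e"})

def looks_like(term: str, word: str) -> bool:
    """Return True if `word` normalises to the blacklisted `term`."""
    cleaned = word.lower().translate(SUBS)
    cleaned = re.sub(r"[^a-z]", "", cleaned)     # drop punctuation like ' and *
    cleaned = re.sub(r"(.)\1+", r"\1", cleaned)  # squeeze repeats: "graa" -> "gra"
    return cleaned == term

for w in ["V1agra", "Via'gra", "Vi@graa"]:
    print(w, looks_like("viagra", w))        # all True
print(looks_like("viagra", "vi*gra"))        # False: a letter was deleted, not replaced
```

The False case shows why real filters layer several techniques rather than relying on any single normalisation.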
Other filters target the actual obfuscation methods, such as the non-standard use of punctuation or numerals into unusual places. Similarly, HTML-based email gives the spammer more tools to obfuscate text. Inserting HTML comments between letters can foil some filters, as can including text made invisible by setting the font color to white on a white background, or shrinking the font size to the smallest fine print.
Another common ploy involves presenting the text as an image, which is either sent along or loaded from a remote server. This can be foiled by not permitting an email-program to load images.
As Bayesian filtering has become popular as a spam-filtering technique, spammers have started using methods to weaken it. To a rough approximation, Bayesian filters rely on word probabilities. If a message contains many words which are only used in spam, and few which are never used in spam, it is likely to be spam.
To weaken Bayesian filters, some spammers, alongside the sales pitch, now include lines of irrelevant, random words, in a technique known as Bayesian poisoning. A variant on this tactic may be borrowed from the Usenet abuser known as "Hipcrime" -- to include passages from books taken from Project Gutenberg, or nonsense sentences generated with "dissociated press" algorithms. Randomly generated phrases can create spoetry (spam poetry) or spam art.
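The word-probability idea, and why padding with innocuous words weakens it, can be shown with a toy model. The counts below are invented, not trained values: each word's spam-versus-ham log-odds are summed, and appending words that rarely appear in spam (the "Bayesian poisoning" above) drags the total score down.

```python
import math

# Toy training counts: how often each word appeared in spam vs. ham.
spam_counts = {"viagra": 50, "winner": 30, "meeting": 2}
ham_counts  = {"viagra": 1,  "winner": 2,  "meeting": 40}
N_SPAM, N_HAM = 100, 100

def spam_score(words):
    """Sum of per-word log-odds; > 0 leans spam, < 0 leans ham."""
    score = 0.0
    for w in words:
        p_spam = (spam_counts.get(w, 0) + 1) / (N_SPAM + 2)  # Laplace smoothing
        p_ham  = (ham_counts.get(w, 0) + 1) / (N_HAM + 2)
        score += math.log(p_spam / p_ham)
    return score

clean    = spam_score(["viagra", "winner"])
poisoned = spam_score(["viagra", "winner", "meeting", "meeting", "meeting"])
print(clean > 0)      # True: classified as spam
print(poisoned < 0)   # True: the padding flipped the verdict
```

Real filters mitigate this by, among other things, weighting strongly indicative tokens and retraining on poisoned samples.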
Another method used to masquerade spam as legitimate messages is the use of auto-generated sender names in the From: field, ranging from realistic ones such as "Jackie F. Bird" to (either by mistake or intentionally) bizarre attention-grabbing names such as "Sloppiest U. Epiglottis" or "Attentively E. Behavioral". Return addresses are also routinely auto-generated, often using unsuspecting domain owners' legitimate domain names, leading some users to blame the innocent domain owners.
Blocking lists use IP addresses rather than sender domain names, as these are more accurate. A mail purporting to be from example.com can be seen to be faked by looking for the originating IP address in the email's headers; also Sender Policy Framework, for example, helps by stating that a certain domain will only send email from certain IP addresses.
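The Sender Policy Framework check reduces to a membership test: the domain owner publishes which IP addresses may send its mail, and the receiver compares the connecting IP against that list. This is a minimal illustration, not a real SPF evaluator; the `published` table stands in for a DNS TXT record lookup, and all names and addresses are hypothetical.

```python
# Hypothetical stand-in for the domain's published SPF data (normally a
# DNS TXT record such as "v=spf1 ip4:192.0.2.10 ip4:192.0.2.11 -all").
published = {"example.com": {"192.0.2.10", "192.0.2.11"}}

def spf_pass(mail_from_domain: str, connecting_ip: str) -> bool:
    """Return True if the connecting IP is authorised to send for the domain."""
    allowed = published.get(mail_from_domain, set())
    return connecting_ip in allowed

print(spf_pass("example.com", "192.0.2.10"))    # True  -> accept
print(spf_pass("example.com", "198.51.100.9"))  # False -> likely forged sender
```

Real SPF records also support CIDR ranges, includes, and soft-fail qualifiers, which this sketch omits.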
Spam can also be hidden inside a fake "Undelivered mail notification" which looks like the failure notices sent by a mail transfer agent (a "MAILER-DAEMON") when it encounters an error.
A number of other online activities and business practices are considered by anti-spam activists to be connected to spamming. These are sometimes termed spam-support services: business services, other than the actual sending of spam, which permit the spammer to continue operating. Spam-support services can include processing orders for goods advertised in spam, hosting Web sites or DNS records referenced in spam messages, or a number of specific services as follows:
Some Internet hosting firms advertise bulk-friendly or bulletproof hosting. This means that, unlike most ISPs, they will not terminate a customer for spamming. These hosting firms operate as clients of larger ISPs, and many have eventually been taken offline by these larger ISPs as a result of complaints regarding spam activity.
Thus, while a firm may advertise bulletproof hosting, it is ultimately unable to deliver without the connivance of its upstream ISP. However, some spammers have managed to get what is called a pink contract (see below) – a contract with the ISP that allows them to spam without being disconnected.
A few companies produce spamware, or software designed for spammers. Spamware varies widely, but may include the ability to import thousands of addresses, to generate random addresses, to insert fraudulent headers into messages, to use dozens or hundreds of mail servers simultaneously, and to make use of open relays. The sale of spamware is illegal in eight U.S. states.
So-called millions CDs are commonly advertised in spam. These are CD-ROMs purportedly containing lists of email addresses, for use in sending spam to these addresses. Such lists are also sold directly online, frequently with the false claim that the owners of the listed addresses have requested (or "opted in") to be included.
Such lists often contain invalid addresses. In recent years, these have fallen almost entirely out of use, both because of the low quality of the email addresses available on them and because some email lists exceed 20 GB in size, far more than fits on a CD.
A number of DNS blacklists (DNSBLs), including the MAPS RBL, Spamhaus SBL, SORBS and SPEWS, target the providers of spam-support services as well as spammers. DNSBLs blacklist IPs or ranges of IPs to persuade ISPs to terminate services with known customers who are spammers or resell to spammers.
Unsolicited bulk email (UBE)
A synonym for email spam.
Unsolicited commercial email (UCE)
Spam promoting a commercial service or product. This is the most common type of spam, but it excludes spam messages that are hoaxes (e.g. virus warnings), political advocacy, religious messages, and chain letters sent by a person to many other people. The term UCE may be most common in the USA.
A pink contract is a service contract offered by an ISP which offers bulk email service to spamming clients, in violation of that ISP's publicly posted acceptable use policy.
Spamvertising is advertising through the medium of spam.
Opt-in, confirmed opt-in, double opt-in, opt-out
Opt-in, confirmed opt-in, double opt-in, and opt-out refer to whether the people on a mailing list are given the option to be put on, or taken off, the list. Confirmation (the "double" of marketing speak) refers to verifying that an email address submitted, e.g. through a web form, actually belongs to someone who asked to join the mailing list, rather than adding the address without verification.
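The confirmation step is typically implemented with a token: the list server mails the submitted address a token derived from a server-side secret, and only adds the address once that token comes back. A minimal sketch, with an illustrative secret and addresses:

```python
import hashlib
import hmac

SECRET = b"server-side-secret"  # illustrative; a real deployment keeps this private

def confirm_token(address: str) -> str:
    """Token mailed to the address that just asked to subscribe."""
    return hmac.new(SECRET, address.encode(), hashlib.sha256).hexdigest()[:16]

def confirm(address: str, token: str) -> bool:
    """True only if the token was genuinely issued for this address."""
    return hmac.compare_digest(confirm_token(address), token)

t = confirm_token("user@example.org")
print(confirm("user@example.org", t))   # True  -> subscription confirmed
print(confirm("other@example.org", t))  # False -> ignore the request
```

Because the token is bound to the address, a spammer cannot confirm an address they submitted but do not control.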
Final, Ultimate Solution for the Spam Problem (FUSSP)
An ironic reference to naïve developers who believe they have invented the perfect spam filter, which will stop all spam from reaching users' inboxes while deleting no legitimate email accidentally.
Bacn is an infrequently-used term to refer to email sent to a user who at one time subscribed to a mailing list - not unsolicited, but also not personal.
E-tailer (Online shopping)
Online shopping is the process whereby consumers directly buy goods or services from a seller in real-time, without an intermediary service, over the Internet. It is a form of electronic commerce.
An online shop, E-tailer, eshop, e-store, internet shop, webshop, webstore, online store, or virtual store evokes the physical analogy of buying products or services at a bricks-and-mortar retailer or in a shopping centre. The process is called Business-to-Consumer (B2C) online shopping. When a business buys from another business it is called Business-to-Business (B2B) online shopping.
In 1990 Tim Berners-Lee created the first World Wide Web server and browser. It opened for commercial use in 1991. In 1994 other advances took place, such as online banking and the opening of an online pizza shop by Pizza Hut.
During that same year, Netscape introduced SSL encryption of data transferred online, which has become essential for secure online shopping. Also in 1994 the German company Intershop introduced its first online shopping system. In 1995 Amazon launched its online shopping site, and in 1996 eBay appeared.
In recent years, online shopping has become popular. In order to shop online, one must have access to a computer as well as a credit card or debit card. Shopping has evolved with the growth of technology. According to research published in the Journal of Electronic Commerce Research ("The Impact of Internet User Shopping Patterns and Demographics on Consumer Mobile Buying Behaviors", Enrique, 2005), if one focuses on the demographic characteristics of the in-home shopper, in general, the higher the level of education, income, and occupation of the head of the household, the more favorable the perception of non-store shopping.
Another influential factor in consumer attitude towards non-store shopping is exposure to technology, since it has been demonstrated that increased exposure to technology increases the probability of developing favorable attitudes towards new shopping channels.
Consumers find a product of interest by visiting the website of the retailer directly, or do a search across many different vendors using a shopping search engine.
Once a particular product has been found on the web site of the seller, most online retailers use shopping cart software to allow the consumer to accumulate multiple items and to adjust quantities, by analogy with filling a physical shopping cart or basket in a conventional store.
A "checkout" process follows (continuing the physical-store analogy) in which payment and delivery information is collected, if necessary. Some stores allow consumers to sign up for a permanent online account so that some or all of this information only needs to be entered once.
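The cart-and-checkout flow above can be sketched with a small data structure. The product names and prices are invented for illustration:

```python
class Cart:
    """Minimal shopping cart: accumulate items, adjust quantities, total up."""

    def __init__(self):
        self.items = {}  # product name -> (unit price, quantity)

    def add(self, product, price, qty=1):
        _, old_qty = self.items.get(product, (price, 0))
        self.items[product] = (price, old_qty + qty)

    def set_quantity(self, product, qty):
        """Adjust a line's quantity; zero or less removes the line."""
        if qty <= 0:
            self.items.pop(product, None)
        else:
            price, _ = self.items[product]
            self.items[product] = (price, qty)

    def total(self):
        return sum(price * qty for price, qty in self.items.values())

cart = Cart()
cart.add("T-shirt", 12.50, 2)
cart.add("Mug", 7.00)
cart.set_quantity("T-shirt", 1)  # shopper changes their mind
print(cart.total())  # 19.5
```

In a real store the checkout step would then attach the payment and delivery details collected from the shopper (or from their saved account) to this cart before submitting the order.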
The consumer often receives an e-mail confirmation once the transaction is complete. Less sophisticated stores may rely on consumers to phone or e-mail their orders (though credit card numbers are not accepted by e-mail, for security reasons).
Online shoppers commonly use a credit card to make payments, however some systems enable users to create accounts and pay by alternative means, such as:
Billing to mobile phones and landlines
Cash on delivery (C.O.D., offered by very few online stores)
Direct debit in some countries
Electronic money of various types
Postal money order
Wire transfer/delivery on payment
Some sites will not accept international credit cards, some require both the purchaser's billing address and shipping address to be in the same country in which the site does its business, and still other sites allow customers from anywhere to send gifts anywhere. The financial part of a transaction might be processed in real time (for example, letting the consumer know their credit card was declined before they log off), or might be done later as part of the fulfillment process.
Once a payment has been accepted the goods or services can be delivered in the following ways.
Downloading: This is the method often used for digital media products such as software, music, movies, or images.
Drop shipping: The order is passed to the manufacturer or third-party distributor, who ships the item directly to the consumer, bypassing the retailer's physical location to save time, money, and space.
In-store pickup: The customer orders online, finds a local store using locator software and picks the product up at the closest store. This is the method often used in the bricks and clicks business model.
Printing out, provision of a code for, or emailing of such items as admission tickets and scrip (e.g., gift certificates and coupons). The tickets, codes, or coupons may be redeemed at the appropriate physical or online premises and their content reviewed to verify their eligibility (e.g., assurances that the right of admission or use is redeemed at the correct time and place, for the correct dollar amount, and for the correct number of uses).
Shipping: The product is shipped to the customer's address or that of a customer-designated third party.
Will call, COBO (in Care Of Box Office), or "at the door" pickup: The patron picks up pre-purchased tickets for an event, such as a play, sporting event, or concert, either just before the event or in advance. With the onset of the Internet and e-commerce sites, which allow customers to buy tickets online, the popularity of this service has increased.
Shopping cart systems
Simple systems allow the offline administration of products and categories. The shop is then generated as HTML files and graphics that can be uploaded to a webspace. These systems do not use an online database.
A high end solution can be bought or rented as a standalone program or as an addition to an enterprise resource planning program. It is usually installed on the company's own webserver and may integrate into the existing supply chain so that ordering, payment, delivery, accounting and warehousing can be automated to a large extent.
Other solutions allow the user to register and create an online shop on a portal that hosts multiple shops at the same time.
Open source shopping cart packages include advanced platforms such as Interchange, and off the shelf solutions as Avactis, Satchmo, osCommerce, Magento, Zen Cart, VirtueMart, Batavi and PrestaShop.
Commercial systems can also be tailored to one's needs so the shop does not have to be created from scratch. By using a pre-existing framework, software modules for various functionalities required by a web shop can be adapted and combined.
Like many online auction websites, many websites allow small businesses to create and maintain online shops (e-commerce shopping carts) without the complexity involved in purchasing and developing expensive standalone e-commerce software solutions.
Why does electronic shopping exist? For customers it is not only because of the high level of convenience, but also because of the broader selection; competitive pricing and greater access to information. For organizations it increases their customer value and the building of sustainable capabilities, next to the increased profits.
Designers of online shops should consider the effects of information load. Mehrabian and Russell (1974) introduced the concept of information rate (load) as the complex spatial and temporal arrangements of stimuli within a setting. The notion of information load is directly related to concerns about whether consumers can be given too much information in virtual shopping environments.
Compared with conventional retail shopping, computer shopping enriches the information environment of virtual shopping by providing additional product information, such as comparative products and services, as well as various alternatives and attributes of each alternative, etc.
Two major sub-dimensions have been identified for information load: complexity and novelty. Complexity refers to the number of different elements or features of a site, which can be the result of increased information diversity. Novelty involves the unexpected, surprising, new, or unfamiliar aspects of the site. Research by Huang (2000) showed that the novelty dimension kept consumers exploring the shopping sites, whereas the complexity dimension has the potential to induce impulse purchases.
The main idea of online shopping is not just having a good-looking website that is listed in many search engines, or the artistry behind the site. Nor is it only about disseminating information; it is also about building relationships and making money.
Often, organizations try to adopt online shopping techniques without understanding them and/or without a sound business model. Rather than merely supporting the organization's culture and brand name, the website should satisfy consumers' expectations.
A majority of consumers choose online shopping for a faster and more efficient shopping experience. Many researchers note that the novelty of the web has worn off and that user-centered design has become very important. Companies should always remember that certain things, such as understanding the customer's wants and needs and living up to promises, never go out of style, because they give customers a reason to come back.
And the reason will remain if consumers consistently get what they expect. McDonaldization theory can be applied to online shopping, because online shopping is becoming more and more popular, and a website that wants to gain more shoppers will use the four major principles of McDonaldization: efficiency, calculability, predictability, and control.
Organizations that want people to shop with them online should invest extensive amounts of time and money to define, design, develop, test, implement, and maintain the website. Also, if a company wants its website to be popular among online shoppers, it should leave the user with a positive impression of the organization, so consumers get the sense that the company cares about them.
The organization that wants to be accepted in online shopping needs to remember that it is easier to lose a customer than to gain one. Many researchers state that even a "top-rated" site will go nowhere if the organization fails to live up to common etiquette, such as returning e-mails in a timely fashion, notifying customers of problems, being honest, and being good stewards of the customers' data.
Organizations that want to keep their customers or gain new ones should try to eliminate mistakes and make themselves more appealing to online shoppers. This is why many designers of webshops consider research outcomes concerning consumer expectations. Research conducted by Elliot and Fowell (2000) revealed satisfactory and unsatisfactory customer experiences.
It is important to take the country and customers into account. For example, in Japan privacy is very important, and emotional involvement matters more on a pension site than on a shopping site. There is also a difference in experience: experienced users focus more on the variables that directly influence the task, while novice users focus more on understanding the information.
There are several techniques for inspecting usability. The ones used in the research of Chen & Macredie (2005) are heuristic evaluation, cognitive walkthrough, and user testing. Every technique has its own advantages and disadvantages, so it is important to check per situation which technique is appropriate.
Once customers have visited the online shop, a couple of factors determine whether they will return to the site. The most important factors are the ease of use and the presence of user-friendly features.
The System Itself
The shopping cart system works as its name suggests. First, the customer chooses the desired product. Once this step is done, an option to add the product to the cart is given; this step ensures that the desired item will be purchased.
Once the customer has finished browsing for other potential purchases and has decided that the previously chosen product is the one to buy, the user must then follow the steps provided by the website in order to complete the transaction (this normally requires an active account and/or registration). Lastly, the order will be delivered to the specified address by a predicted date, which may vary depending on the speed of the process.
E-commerce B2C product sales totaled $142.5 billion, representing about 8% of retail product sales in the United States. The $26 billion worth of clothes sold online represented about 13% of the domestic market, and with 72% of women looking online for apparel, it has become one of the most popular cross-shopping categories. Forrester Research estimates that the United States online retail industry will be worth $279 billion in 2015.
For developing countries and low-income households in developed countries, adoption of e-commerce in place of or in addition to conventional methods is limited by a lack of affordable Internet access.
Online stores are usually available 24 hours a day, and many consumers have Internet access both at work and at home. Other establishments such as internet cafes and schools provide access as well. A visit to a conventional retail store requires travel and must take place during business hours.
In the event of a problem with the item – it is not what the consumer ordered, or it is not what they expected – consumers are concerned with the ease with which they can return an item for the correct one or for a refund. Consumers may need to contact the retailer, visit the post office and pay return shipping, and then wait for a replacement or refund. Some online companies have more generous return policies to compensate for the traditional advantage of physical stores.
For example, the online shoe retailer Zappos.com includes labels for free return shipping, and does not charge a restocking fee, even for returns which are not the result of merchant error. (Note: In the United Kingdom, online shops are prohibited from charging a restocking fee if the consumer cancels their order in accordance with the Consumer Protection (Distance Selling) Act 2000.)
Information and reviews
Online stores must describe products for sale with text, photos, and multimedia files, whereas in a physical retail store, the actual product and the manufacturer's packaging will be available for direct inspection (which might involve a test drive, fitting, or other experimentation).
Some online stores provide or link to supplemental product information, such as instructions, safety procedures, demonstrations, or manufacturer specifications. Some provide background information, advice, or how-to guides designed to help consumers decide which product to buy.
Some stores even allow customers to comment on or rate their items. There are also dedicated review sites that host user reviews for different products. Reviews, and now blogs, give customers the option of shopping more cheaply and organising purchases from all over the world without having to depend on local retailers.
In a conventional retail store, clerks are generally available to answer questions. Some online stores have real-time chat features, but most rely on e-mail or phone calls to handle customer questions. ...
Price and selection
One advantage of shopping online is being able to quickly seek out deals for items or services with many different vendors (though some local search engines do exist to help consumers locate products for sale in nearby stores). Search engines, online price comparison services and discovery shopping engines can be used to look up sellers of a particular product or service.
Shipping costs (if applicable) reduce the price advantage of online merchandise, though depending on the jurisdiction, a lack of sales tax may compensate for this.
Shipping a small number of items, especially from another country, is much more expensive than making the larger shipments bricks-and-mortar retailers order. Some retailers (especially those selling small, high-value items like electronics) offer free shipping on sufficiently large orders.
Another major advantage for retailers is the ability to rapidly switch suppliers and vendors without disrupting users' shopping experience.
Fraud and security concerns
Given the lack of ability to inspect merchandise before purchase, consumers are at higher risk of fraud on the part of the merchant than in a physical store. Merchants also risk fraudulent purchases using stolen credit cards or fraudulent repudiation of the online purchase. With a warehouse instead of a retail storefront, merchants face less risk from physical theft.
Secure Sockets Layer (SSL) encryption has generally solved the problem of credit card numbers being intercepted in transit between the consumer and the merchant. Identity theft is still a concern for consumers when hackers break into a merchant's web site and steal names, addresses and credit card numbers.
A number of high-profile break-ins in the 2000s have prompted some U.S. states to require disclosure to consumers when this happens. Computer security has thus become a major concern for merchants and e-commerce service providers, who deploy countermeasures such as firewalls and anti-virus software to protect their networks.
Phishing is another danger, where consumers are fooled into thinking they are dealing with a reputable retailer, when they have actually been manipulated into feeding private information to a system operated by a malicious party. Denial of service attacks are a minor risk for merchants, as are server and network outages.
Quality seals can be placed on a shop's web page if it has undergone an independent assessment and meets all requirements of the company issuing the seal. The purpose of these seals is to increase the confidence of online shoppers; the existence of many different seals, or seals unfamiliar to consumers, may foil this effort to a certain extent. A number of resources offer advice on how consumers can protect themselves when using online retailer services. These include:
Sticking with known stores, or attempting to find independent consumer reviews of their experiences; also ensuring that there is comprehensive contact information on the website before using the service, and noting if the retailer has enrolled in industry oversight programs such as trust mark or trust seal.
Before buying from a new company, evaluate the website by considering issues such as: the professionalism and user-friendliness of the site; whether or not the company lists a telephone number and/or street address along with e-contact information; whether a fair and reasonable refund and return policy is clearly stated; and whether there are hidden price inflators, such as excessive shipping and handling charges.
Ensuring that the vendor's site is protected with SSL (see above) when entering credit card information. If it is, the address on the credit card information entry screen will start with "https".
Using strong passwords that avoid personal information. Another option is a "pass phrase," which might be something along the lines of: "I shop 4 good a buy!!" Pass phrases are difficult to crack, provide a variety of upper-case, lower-case, and special characters, and can be site-specific and easy to remember.
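Two of the checks above, confirming the https scheme and gauging a pass phrase's character mix, can be sketched in Python. This is an illustrative sketch; the 12-character length threshold is an assumption, not a rule from the text.

```python
import string
from urllib.parse import urlparse

def is_https(url: str) -> bool:
    """The address of a secure card-entry page starts with https."""
    return urlparse(url).scheme == "https"

def passphrase_classes(p: str) -> dict:
    """Report which character classes a pass phrase covers."""
    return {
        "upper": any(c in string.ascii_uppercase for c in p),
        "lower": any(c in string.ascii_lowercase for c in p),
        "digit": any(c in string.digits for c in p),
        "special": any(not c.isalnum() and not c.isspace() for c in p),
        "long_enough": len(p) >= 12,  # threshold chosen for illustration
    }

print(is_https("https://shop.example.com/pay"))                   # True
print(all(passphrase_classes("I shop 4 good a buy!!").values()))  # True
```

The example pass phrase from the text covers all four character classes while remaining easy to remember.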
Although the benefits of online shopping are considerable, when the process goes poorly it can create a thorny situation. A few problems that shoppers potentially face include identity theft, faulty products, and the accumulation of spyware.
Whenever you purchase a product, you are required to provide your credit card information and billing/shipping address. If the website is not secure, a customer's information can be accessed by anyone who knows how to obtain it. Most large online corporations are inventing new ways to make fraud more difficult; however, criminals are constantly responding to these developments with new ways to manipulate the system.
Even though these efforts are making it easier to protect yourself online, it is a constant fight to maintain the lead. It is advisable to be aware of the most current technology and scams out there to fully protect yourself and your finances. One of the hardest areas to deal with in online shopping is the delivery of the products. Most companies offer shipping insurance in case the product is lost or damaged; however, if the buyer opts not to purchase insurance on their products, they are generally out of luck.
Some shipping companies will offer refunds or compensation for the damage, but it is up to their discretion if this will happen. It is important to realize that once the product leaves the hands of the seller, they have no responsibility (provided the product is what the buyer ordered and is in the specified condition).
Lack of full cost disclosure
The lack of full disclosure of the total cost of purchase is one of the concerns of online shopping. While it may be easy to compare the base price of an item online, it may not be easy to see the total cost up front, as additional fees such as shipping are often not visible until the final step in the checkout process.
The problem is especially evident with cross-border purchases, where the cost indicated at the final checkout screen may not include additional fees that must be paid upon delivery, such as duties and brokerage. Some services, such as the Canadian-based Wishabi, attempt to include estimates of these additional costs, but the lack of general full-cost disclosure remains a concern.
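The total landed cost of a cross-border purchase can be sketched as a simple sum of the components mentioned above. All rates and fees below are illustrative, not real tariff data.

```python
def total_landed_cost(base, shipping=0.0, tax_rate=0.0, duties=0.0, brokerage=0.0):
    """Sum every component a cross-border shopper may ultimately pay.
    Rates and fees are hypothetical placeholders for illustration."""
    return round(base * (1 + tax_rate) + shipping + duties + brokerage, 2)

# An item that looks like an $80.00 bargain at checkout can end up
# costing considerably more once shipping, tax, duties and brokerage land.
print(total_landed_cost(80.00, shipping=12.50, tax_rate=0.05,
                        duties=9.60, brokerage=7.00))  # 113.1
```

With no extra fees the function simply returns the base price, which is the best case a domestic shopper sees.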
Privacy of personal information is a significant issue for some consumers. Different legal jurisdictions have different laws concerning consumer privacy, and different levels of enforcement. Many consumers wish to avoid spam and telemarketing which could result from supplying contact information to an online merchant. In response, many merchants promise not to use consumer information for these purposes, or provide a mechanism to opt-out of such contacts.
Many websites keep track of consumers' shopping habits in order to suggest items and other websites to view. Brick-and-mortar stores also collect consumer information. Some ask for address and phone number at checkout, though consumers may refuse to provide it. Many larger stores use the address information encoded on consumers' credit cards (often without their knowledge) to add them to a catalog mailing list. This information is obviously not accessible to the merchant when paying in cash.
Typically, pictures and descriptions of an item are all a customer can rely on when shopping at online stores. If the customer has no prior exposure to the item's handling qualities, they will not have a full understanding of the item they are buying. However, written and video reviews are readily available from consumers who have purchased similar items in the past. These can be helpful for prospective customers, but such reviews can be subjective, based on personal preferences that may not reflect the buyer's satisfaction once the item has been received.
Many successful purely virtual companies deal with digital products, (including information storage, retrieval, and modification), music, movies, office supplies, education, communication, software, photography, and financial transactions. Other successful marketers use Drop shipping or affiliate marketing techniques to facilitate transactions of tangible goods without maintaining real inventory.
Some non-digital products have been more successful than others for online stores. Profitable items often have a high value-to-weight ratio, they may involve embarrassing purchases, they may typically go to people in remote locations, and they may have shut-ins as their typical purchasers. Items which can fit in a standard mailbox — such as music CDs, DVDs and books — are particularly suitable for a virtual marketer.
Products such as spare parts, both for consumer items like washing machines and for industrial equipment like centrifugal pumps, also seem good candidates for selling online. Retailers often need to order spare parts specially, since they typically do not stock them at consumer outlets—in such cases, e-commerce solutions in spares do not compete with retail stores, only with other ordering systems.
A factor for success in this niche can consist of providing customers with exact, reliable information about which part number their particular version of a product needs, for example by providing parts lists keyed by serial number.
Products less suitable for e-commerce include products that have a low value-to-weight ratio, products that have a smell, taste, or touch component, products that need trial fittings — most notably clothing — and products where colour integrity appears important. Nonetheless, Tesco.com has had success delivering groceries in the UK, albeit that many of its goods are of a generic quality, and clothing sold through the internet is big business in the U.S.
High-volume websites, such as Yahoo!, Amazon.com and eBay, offer hosting services for online stores to all size retailers. These stores are presented within an integrated navigation framework. Collections of online stores are sometimes known as virtual shopping malls or online marketplaces.
The impact of others’ review on consumer behaviors
One of the great benefits of online shopping is the ability to read others' reviews of a product or service, whether from experts or simply fellow shoppers.
The Nielsen Company conducted a survey in March 2010 and polled more than 27,000 Internet users in 55 markets from the Asia-Pacific, Europe, Middle East, North America and South America to look at questions such as “How do consumers shop online?”, “What do they intend to buy?”, “How do they use various online shopping web pages?”, and the impact of social media and other factors that come into play when consumers are trying to decide how to spend their money on which product or service.
According to that research, reviews of electronics (57%) such as DVD players, cell phones or PlayStations, reviews of cars (45%), and reviews of software (37%) play an important role in influencing consumers who tend to research and buy online.
In addition to online reviews, peer recommendations on online shopping pages or social media play a key role for online shoppers while researching future purchases of electronics, cars and travel or concert bookings. On the other hand, according to the same research, 40% of online shoppers indicate that they would not even buy electronics without consulting online reviews first.
Hardware security module
A hardware security module (often abbreviated to HSM) is a type of secure cryptoprocessor targeted at managing digital keys, accelerating cryptoprocesses in terms of digital signings/second and for providing strong authentication to access critical keys for server applications. These modules are physical devices that traditionally come in the form of a plug-in card or an external TCP/IP security device that can be attached directly to the server or general purpose computer.
The goals of an HSM are (a) onboard secure generation, (b) onboard secure storage, (c) use of cryptographic and sensitive data material, (d) offloading application servers for complete asymmetric and symmetric cryptography. HSMs provide both logical and physical protection of these materials from non-authorized use and potential adversaries. In short, they protect high-value cryptographic keys.
The cryptographic material handled by most HSMs consists of asymmetric key pairs (and certificates) used in public-key cryptography. Some HSMs can also handle symmetric keys and other arbitrary data.
Many HSM systems have means to securely back up the keys they handle, either in wrapped form via the computer's operating system or externally using a smartcard or some other security token. HSMs should never allow secrets to be exported in plaintext form, even when migrating between HSMs or performing backup operations.
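The export prohibition above can be sketched as a toy Python class. This is purely illustrative: HMAC-SHA-256 stands in for the module's signing primitive, and a real HSM enforces this contract in tamper-resistant hardware rather than in software.

```python
import hashlib
import hmac
import os

class SoftHsmKey:
    """Toy model of the HSM contract: the secret is generated and used
    inside the object, and there is deliberately no plaintext export."""

    def __init__(self):
        self.__secret = os.urandom(32)  # onboard secure generation

    def sign(self, message: bytes) -> bytes:
        # The key is used without ever being revealed to the caller.
        return hmac.new(self.__secret, message, hashlib.sha256).digest()

    def export_plaintext(self):
        raise PermissionError("plaintext key export is not permitted")

key = SoftHsmKey()
tag = key.sign(b"transfer $100")
print(len(tag))  # 32 (a SHA-256 tag)
```

Callers can verify and sign through the object, but any attempt to read the raw key material fails, mirroring the "never export in plaintext" rule.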
Some HSM systems are also hardware cryptographic accelerators. They usually cannot beat the performance of software-only solutions for symmetric key operations. However, with performance ranging from 1 to 7,000 1024-bit RSA signatures per second, HSMs can provide significant CPU offload for asymmetric key operations. Since NIST has recommended the use of 2,048-bit RSA keys from 2010, performance at longer key sizes is becoming increasingly important.
Because HSMs are often part of a mission-critical infrastructure such as a public key infrastructure or online banking application, HSMs can typically be clustered for high availability. Some HSMs feature dual power supplies to enable business continuity.
A few of the HSMs available in the market have the ability to execute specially developed execution modules within the HSM's secure enclosure. Such ability is useful, for example, in cases where special algorithms or business logic has to be executed in a secured and controlled environment.
The execution modules can be developed in native C, in .NET, Java or other programming languages. While providing the benefit of securing application-specific code, these execution engines complicate an HSM's FIPS or Common Criteria validation status.
Internet marketing, also known as digital marketing, web marketing, online marketing, search marketing or e-marketing, refers to the marketing (generally promotion) of products or services over the Internet. iMarketing is used as an abbreviated form for Internet marketing.
Internet marketing is considered to be broad in scope because it not only refers to marketing on the Internet, but also includes marketing done via e-mail and wireless media. Digital customer data and electronic customer relationship management (ECRM) systems are also often grouped together under internet marketing.
Internet marketing ties together the creative and technical aspects of the Internet, including design, development, advertising, and sales. Internet marketing also refers to the placement of media along many different stages of the customer engagement cycle through search engine marketing (SEM), search engine optimization (SEO), banner ads on specific websites, email marketing, mobile advertising, and Web 2.0 strategies.
In 2008, The New York Times, working with comScore, published an initial estimate to quantify the user data collected by large Internet-based companies. Counting four types of interactions with company websites in addition to the hits from advertisements served from advertising networks, the authors found that the potential for collecting data was up to 2,500 times per user per month.
Types of Internet marketing
iMarketing is broadly divided into the following three types:
- PPC/I (Pay per click/impression): Advertisements for brands and products are placed on pre-decided websites and help generate leads for companies.
- SEO (Search engine optimization): the process of improving the visibility of a website or a web page in search engines via the "natural" or un-paid ("organic" or "algorithmic") search results.
- Social media marketing (SMM)
Internet marketing is associated with several business models:
- E-commerce: a model whereby goods are sold directly to consumers (B2C), businesses (B2B), or from consumer to consumer (C2C).
- Lead-based websites: a strategy whereby an organization generates value by acquiring sales leads from its website, similar to walk-in customers in the retail world. These prospects are often referred to as organic leads.
- Affiliate Marketing: a process wherein a product or service developed by one entity is sold by other active sellers for a share of profits. The entity that owns the product may provide some marketing material (e.g., sales letters, affiliate links, tracking facilities, etc.); however, the vast majority of affiliate marketing relationships come from e-commerce businesses that offer affiliate programs.
- Local Internet marketing: a strategy through which a small company utilizes the Internet to find and to nurture relationships that can be used for real-world advantages. Local Internet marketing uses tools such as social media marketing, local directory listing, and targeted online sales promotions.
In a one-to-one approach, marketers target a user browsing the Internet alone, so that the marketers' messages reach the user personally. This approach is used in search marketing, for which the advertisements are based on search engine keywords entered by the users. This approach usually works under the pay per click (PPC) method.
Appeal to specific interests
When appealing to specific interests, marketers place an emphasis on appealing to a specific behavior or interest, rather than reaching out to a broadly defined demographic. These marketers typically segment their markets according to age group, gender, geography, and other general factors.
Niche and hyper-niche internet marketing put further emphasis on creating destinations for web users and consumers on specific topics and products. Niche marketers differ from traditional Internet marketers as they have a more specialized topic knowledge.
For example, whereas in traditional Internet marketing a website would be created and promoted on a high-level topic such as kitchen appliances, niche marketing would focus on more specific topics such as 4-slice toasters. Niche marketing provides end users of such sites different targeted information, and allows the creators to establish themselves as authorities on the topic or product.
In Internet marketing, geo targeting and geo marketing are the methods of determining the geolocation of a website visitor with geolocation software, and delivering different content to that visitor based on his or her location, such as latitude and longitude, country, region or state, city, metro code or zip code, organization, Internet Protocol (IP) address, ISP, and other criteria.
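Geo targeting by IP address can be sketched with the Python standard library's ipaddress module. The network-to-region table below is entirely hypothetical (real geolocation relies on commercial IP databases), and the addresses come from the documentation-reserved ranges.

```python
import ipaddress

# Hypothetical mapping of network blocks to regions, for illustration only.
REGIONS = {
    ipaddress.ip_network("203.0.113.0/24"): "Asia-Pacific",
    ipaddress.ip_network("198.51.100.0/24"): "North America",
}

def region_for(ip: str, default: str = "global") -> str:
    """Return the region whose network block contains the visitor's IP."""
    addr = ipaddress.ip_address(ip)
    for net, region in REGIONS.items():
        if addr in net:
            return region
    return default

print(region_for("198.51.100.7"))  # North America
print(region_for("192.0.2.1"))     # global (no block matches)
```

Once the region is known, the site can select localized content, currency, or promotions for that visitor.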
Internet marketing principles
6 key principles of persuasion by Robert Cialdini
One of the most famous guides to internet marketing is Robert Cialdini's book "Influence: The Psychology of Persuasion." According to Cialdini, there are 6 key principles of persuasion: 1. Reciprocity, 2. Commitment and Consistency, 3. Social Proof/Consensus, 4. Authority, 5. Liking, 6. Scarcity.
Advantages and limitations of Internet marketing
Internet marketing is inexpensive when examining the ratio of cost to the reach of the target audience. Companies can reach a wide audience for a small fraction of traditional advertising budgets. The nature of the medium allows consumers to research and to purchase products and services conveniently. Therefore, businesses have the advantage of appealing to consumers in a medium that can bring results quickly. The strategy and overall effectiveness of marketing campaigns depend on business goals and cost-volume-profit (CVP) analysis.
Internet marketers also have the advantage of measuring statistics easily and inexpensively; almost all aspects of an Internet marketing campaign can be traced, measured, and tested, in many cases through the use of an ad server. The advertisers can use a variety of methods, such as pay per impression, pay per click, pay per play, and pay per action.
Therefore, marketers can determine which messages or offerings are more appealing to the audience. The results of campaigns can be measured and tracked immediately because online marketing initiatives usually require users to click on an advertisement, to visit a website, and to perform a targeted action.
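The pricing methods above (pay per impression, pay per click, pay per action) reduce to simple ratios over raw campaign counts. A minimal sketch, with made-up figures:

```python
def campaign_metrics(spend, impressions, clicks, conversions):
    """Derive the standard online-advertising ratios from raw counts."""
    return {
        "cpm": round(spend / impressions * 1000, 2),  # cost per thousand impressions
        "cpc": round(spend / clicks, 2),              # cost per click
        "cpa": round(spend / conversions, 2),         # cost per action/acquisition
        "ctr": round(clicks / impressions * 100, 2),  # click-through rate, %
    }

# Illustrative campaign: $500 spend, 100k impressions, 1,250 clicks, 50 sales.
print(campaign_metrics(spend=500.0, impressions=100_000,
                       clicks=1_250, conversions=50))
```

Because every interaction is logged by the ad server, these figures can be recomputed continuously to compare messages or offers, which is the measurability advantage the text describes.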
From the buyer's perspective, however, the inability of shoppers to touch, smell, taste, and "try on" tangible goods before making an online purchase can be limiting. It has become an industry standard for e-commerce vendors to reassure customers by offering liberal return policies and providing in-store pick-up services.
Information security is important both to companies and consumers that participate in online business. Many consumers are hesitant to purchase items over the Internet because they do not believe that their personal information will remain private.
Some companies that purchase customer information offer the option for individuals to have their information removed from their promotional redistribution, also known as opting out. However, many customers are unaware if and when their information is being shared, and are unable to stop the transfer of their information between companies if such activity occurs.
Additionally, companies holding private information are vulnerable to data attacks and leaks. Internet browsing privacy is a related consumer concern. Web sites routinely capture browsing and search history which can be used to provide targeted advertising. Privacy policies can provide transparency to these practices. Spyware prevention software can also be used to shield the consumer.
Another consumer e-commerce concern is whether or not they will receive exactly what they purchase. Online merchants have attempted to address this concern by investing in and building strong consumer brands (e.g., Amazon.com, eBay, and Overstock.com), and by leveraging merchant and feedback rating systems and e-commerce bonding solutions.
All these solutions attempt to assure consumers that their transactions will be free of problems because the merchants can be trusted to provide reliable products and services. Additionally, several major online payment mechanisms (credit cards, PayPal, Google Checkout, etc.) have provided back-end buyer protection systems to address problems if they occur.
As a result, advertisers can more effectively engage their audience and connect them with campaigns that seek to shape consumer attitudes and feelings towards specific products and services.
Effects on industries
The number of banks offering the ability to perform banking tasks over the internet has increased. Online banking appeals to customers because it is often faster and considered more convenient than visiting bank branches.
Internet auctions have become a multi-billion dollar business. Unique items that could only previously be found at flea markets are now being sold on Internet auction websites such as eBay. Specialized e-stores sell a vast amount of items like antiques, movie props, clothing, gadgets, and so on.
As the premier online reselling platform, eBay is often used as a price-basis for specialized items. Buyers and sellers often look at prices on the website before going to flea markets; the price shown on eBay often becomes the item's selling price.
In addition to the major effect internet marketing has had on the technology industry, the effect on the advertising industry itself has been profound. In just a few years, online advertising has grown to be worth tens of billions of dollars annually. PricewaterhouseCoopers reported that US$16.9 billion was spent on online marketing in the U.S. in 2006.
This has caused a growing impact on the United States' electoral process. In 2008, candidates for President heavily utilized Internet marketing strategies to reach constituents. During the 2007 primaries candidates added, on average, over 500 social network supporters per day to help spread their message. President Barack Obama raised over US$1 million in one day during his extensive Democratic candidacy campaign, largely due to online donors.
Several industries have heavily invested in and benefited from internet marketing and online advertising. Some of them were originally brick and mortar businesses such as publishing, music, automotive or gambling, while others have sprung up as purely online businesses, such as digital design and media, blogging, and internet service hosting.
Inventory means a list compiled for some formal purpose, such as the details of an estate going to probate, or the contents of a house let furnished. This remains the prime meaning in British English.
In the USA and Canada the term has developed from a list of goods and materials to the goods and materials themselves, especially those held available in stock by a business; and this has become the primary meaning of the term in North American English, equivalent to the term "stock" in British English. In accounting, inventory or stock is considered an asset.
Inventory management is primarily about specifying the shape and placement of stocked goods. It is required at different locations within a facility, or within many locations of a supply network, to support the regular and planned course of production and stock of materials.
The scope of inventory management concerns the fine lines between replenishment lead time, carrying costs of inventory, asset management, inventory forecasting, inventory valuation, inventory visibility, future inventory price forecasting, physical inventory, available physical space for inventory, quality management, replenishment, returns and defective goods and demand forecasting.
Balancing these competing requirements leads to optimal inventory levels, which is an on-going process as the business needs shift and react to the wider environment. Inventory management involves a retailer seeking to acquire and maintain a proper merchandise assortment while ordering, shipping, handling, and related costs are kept in check.
It also involves systems and processes that identify inventory requirements, set targets, provide replenishment techniques, report actual and projected inventory status, and handle all functions related to the tracking and management of material. This includes the monitoring of material moved into and out of stockroom locations and the reconciling of inventory balances.
It may also include ABC analysis, lot tracking, cycle counting support, and so on. The primary objective is to determine and control stock levels within the physical distribution function, balancing the need for product availability against the need to minimize stock-holding and handling costs.
The reasons for keeping stock
There are three basic reasons for keeping an inventory:
- Time - The time lags present in the supply chain, from supplier to user at every stage, require that you maintain certain amounts of inventory to use during this "lead time." In practice, however, inventory is maintained for consumption during variations in lead time; lead time itself can be addressed by ordering that many days in advance.
- Uncertainty - Inventories are maintained as buffers to meet uncertainties in demand, supply and movements of goods.
- Economies of scale - The ideal condition of "one unit at a time at a place where a user needs it, when he needs it" tends to incur high logistics costs. Buying, moving and storing in bulk therefore brings economies of scale, and thus inventory.
All these stock reasons can apply to any owner or product.
Special terms used in dealing with inventory
- Stock Keeping Unit (SKU) is a unique combination of all the components that are assembled into the purchasable item. Therefore, any change in the packaging or product is a new SKU. This level of detailed specification assists in managing inventory.
- Stockout means running out of the inventory of an SKU.
- "New old stock" (sometimes abbreviated NOS) is a term used in business to refer to merchandise being offered for sale that was manufactured long ago but that has never been used. Such merchandise may not be produced anymore, and the new old stock may represent the only market source of a particular item at the present time.
- Buffer/safety stock
- Cycle stock (Used in batch processes, it is the available inventory, excluding buffer stock)
- De-coupling (Buffer stock held between machines in a single process, serving as a buffer for the next one and allowing a smooth flow of work instead of waiting for the previous or next machine in the same process)
- Anticipation stock (Building up extra stock for periods of increased demand - e.g. ice cream for summer)
- Pipeline stock (Goods still in transit or in the process of distribution - have left the factory but not arrived at the customer yet)
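A minimal sketch of the SKU and stockout terms above, with made-up SKU strings: each packaging or product variant is a distinct SKU, and a stockout is simply a zero on-hand quantity for that SKU.

```python
# Minimal inventory ledger keyed by SKU. The SKU strings are hypothetical;
# note the two entries are the same toaster in different packaging, hence
# two distinct SKUs.
stock = {
    "TOASTER-4SL-RED-BOX": 3,
    "TOASTER-4SL-RED-GIFT": 0,
}

def is_stockout(sku: str) -> bool:
    """A stockout means the on-hand quantity for that SKU is zero."""
    return stock.get(sku, 0) == 0

print(is_stockout("TOASTER-4SL-RED-BOX"))   # False: 3 on hand
print(is_stockout("TOASTER-4SL-RED-GIFT"))  # True: gift packaging is sold out
```

Keeping counts at SKU granularity is what lets a retailer see that one packaging variant has run out even though the underlying product has not.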
While accountants often discuss inventory in terms of goods for sale, organizations - manufacturers, service-providers and not-for-profits - also have inventories (fixtures, furniture, supplies, ...) that they do not intend to sell. Manufacturers', distributors', and wholesalers' inventory tends to cluster in warehouses.
Retailers' inventory may exist in a warehouse or in a shop or store accessible to customers. Inventories not intended for sale to customers or to clients may be held in any premises an organization uses. Stock ties up cash and, if uncontrolled, it will be impossible to know the actual level of stocks and therefore impossible to control them.
While the reasons for holding stock were covered earlier, most manufacturing organizations usually divide their "goods for sale" inventory into:
- Raw materials - materials and components scheduled for use in making a product.
- Work in process, WIP - materials and components that have begun their transformation to finished goods.
- Finished goods - goods ready for sale to customers.
- Goods for resale - returned goods that are salable.
A canned food manufacturer's materials inventory includes the ingredients to form the foods to be canned, empty cans and their lids (or coils of steel or aluminum for constructing those components), labels, and anything else (solder, glue, ...) that will form part of a finished can. The firm's work in process includes those materials from the time of release to the work floor until they become complete and ready for sale to wholesale or retail customers.
This may be vats of prepared food, filled cans not yet labeled or sub-assemblies of food components. It may also include finished cans that are not yet packaged into cartons or pallets. Its finished good inventory consists of all the filled and labeled cans of food in its warehouse that it has manufactured and wishes to sell to food distributors (wholesalers), to grocery stores (retailers), and even perhaps to consumers through arrangements like factory stores and outlet centers.
Case studies are very revealing, and consistently show that improving inventory management has two parts: the capability of the organisation to manage inventory, and the way in which it chooses to do so. For example, a company may wish to install a complex inventory system, but unless there is a good understanding of the role of inventory and its parameters, and an effective business process to support it, the system alone cannot bring the necessary benefits to the organization.
Typical inventory management techniques include Pareto-curve ABC classification and economic order quantity (EOQ) management. A more sophisticated method takes these two techniques further, combining certain aspects of each to create the K-curve methodology. A case study of K-curve benefits to one company shows a successful implementation.
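The economic order quantity mentioned above has a standard closed form, EOQ = sqrt(2DS/H), where D is annual demand, S the fixed cost per order, and H the annual holding cost per unit. A quick sketch with illustrative figures:

```python
import math

def eoq(annual_demand, order_cost, holding_cost_per_unit):
    """Classic Economic Order Quantity: the order size that balances
    ordering cost against inventory carrying cost."""
    return math.sqrt(2 * annual_demand * order_cost / holding_cost_per_unit)

# Hypothetical figures: 10,000 units/year demand, $50 fixed cost per order,
# $2 per unit per year to hold stock.
print(round(eoq(10_000, 50, 2)))  # 707
```

Ordering roughly 707 units at a time (about 14 orders a year in this example) minimizes the combined ordering and holding cost under the model's assumptions of steady demand and fixed costs.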
Unnecessary inventory adds enormously to the working capital tied up in the business, as well as the complexity of the supply chain. Reduction and elimination of these inventory 'wait' states is a key concept in Lean. Too big an inventory reduction too quickly can cause a business to be anorexic.
There are well-proven processes and techniques to assist in inventory planning and strategy, both at the business-overview and part-number level. Many of the big MRP and ERP systems do not offer the necessary inventory planning tools within their integrated planning applications.
Principle of inventory proportionality
Inventory proportionality is the goal of demand-driven inventory management. The primary optimal outcome is to have the same number of days' (or hours', etc.) worth of inventory on hand across all products so that the time of run-out of all products would be simultaneous.
In such a case, there is no "excess inventory", that is, inventory of one product left over when another product runs out. Excess inventory is sub-optimal because the money spent to obtain it could have been better utilized elsewhere, i.e. on the product that just ran out.
The secondary goal of inventory proportionality is inventory minimization. By integrating accurate demand forecasting with inventory management, replenishment inventories can be scheduled to arrive just in time to replenish the product destined to run out first, while at the same time balancing out the inventory supply of all products to make their inventories more proportional, and thereby closer to achieving the primary goal.
Accurate demand forecasting also allows the desired inventory proportions to be dynamic by determining expected sales out into the future; this allows for inventory to be in proportion to expected short-term sales or consumption rather than to past averages, a much more accurate and optimal outcome.
Integrating demand forecasting into inventory management in this way also allows for the prediction of the "can fit" point when inventory storage is limited on a per-product basis.
The technique of inventory proportionality is most appropriate for inventories that remain unseen by the consumer. It contrasts with "keep full" systems, where a retail consumer would like to see full shelves of the product they are buying so as not to think they are buying something old, unwanted or stale, and with "trigger point" systems, where product is reordered when it hits a certain level. Inventory proportionality is used effectively by just-in-time manufacturing processes and retail applications where the product is hidden from view.
One early example of inventory proportionality used in a retail application in the United States is for motor fuel. Motor fuel (e.g. gasoline) is generally stored in underground storage tanks. The motorists do not know whether they are buying gasoline off the top or bottom of the tank, nor need they care. Additionally, these storage tanks have a maximum capacity and cannot be overfilled.
Finally, the product is expensive. Inventory proportionality is used to balance the inventories of the different grades of motor fuel, each stored in dedicated tanks, in proportion to the sales of each grade. Excess inventory is not seen or valued by the consumer, so it is simply cash sunk (literally) into the ground.
Inventory proportionality minimizes the amount of excess inventory carried in underground storage tanks. This application for motor fuel was first developed and implemented by Petrolsoft Corporation in 1990 for Chevron Products Company. Most major oil companies use such systems today.
The use of inventory proportionality in the United States is thought to have been inspired by Japanese just-in-time parts inventory management made famous by Toyota Motors in the 1980s.
High-level inventory management
It seems that around 1880 there was a change in manufacturing practice from companies with relatively homogeneous lines of products to vertically integrated companies with unprecedented diversity in processes and products. Those companies (especially in metalworking) attempted to achieve success through economies of scope - the gains of jointly producing two or more products in one facility.
Managers now needed information on the effect of product-mix decisions on overall profits, and therefore needed accurate product-cost information. A variety of attempts to achieve this were unsuccessful due to the huge overhead of the information processing of the time. However, the burgeoning need for financial reporting after 1900 created unavoidable pressure for financial accounting of stock, and management's need to understand product costs became overshadowed.
In particular, it was the need for audited accounts that sealed the fate of managerial cost accounting. The dominance of financial reporting accounting over management accounting remains to this day with few exceptions, and the financial reporting definitions of 'cost' have distorted effective management 'cost' accounting since that time. This is particularly true of inventory.
Hence, high-level financial inventory has these two basic formulas, which relate to the accounting period:
- Cost of beginning inventory at the start of the period + inventory purchases within the period + cost of production within the period = cost of goods available
- Cost of goods available − cost of ending inventory at the end of the period = cost of goods sold
The benefit of these formulae is that the first absorbs all overheads of production and raw material costs into a value of inventory for reporting. The second formula then creates the new start point for the next period and gives a figure to be subtracted from the sales price to determine some form of sales-margin figure.
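In code form the two formulas chain together, with the ending inventory of one period becoming the beginning inventory of the next; the dollar amounts are made up for illustration:

```python
def cost_of_goods_available(beginning_inventory, purchases, production_cost):
    # First formula: absorb purchases and production costs into one figure.
    return beginning_inventory + purchases + production_cost

def cost_of_goods_sold(goods_available, ending_inventory):
    # Second formula: what remains unsold becomes next period's start point.
    return goods_available - ending_inventory

available = cost_of_goods_available(20_000.0, 55_000.0, 15_000.0)  # 90,000
cogs = cost_of_goods_sold(available, 25_000.0)                     # 65,000
```

The 25,000 of ending inventory would then be the beginning-inventory input for the following period.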
Manufacturing management is more interested in inventory turnover ratio or average days to sell inventory since it tells them something about relative inventory levels.
Inventory turnover ratio (also known as inventory turns) = cost of goods sold / average inventory = cost of goods sold / ((beginning inventory + ending inventory) / 2)
and its inverse
Average days to sell inventory = number of days in a year / inventory turnover ratio = 365 / inventory turnover ratio
This ratio estimates how many times the inventory turns over a year. This number tells how much cash/goods are tied up waiting for the process and is a critical measure of process reliability and effectiveness.
So a factory with two inventory turns has six months' stock on hand, which is generally not a good figure (depending upon the industry), whereas a factory that moves from six turns to twelve turns has probably improved effectiveness by 100%. This improvement will have some negative results in the financial reporting, since the 'value' now stored in the factory as inventory is reduced.
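The turnover ratio and its inverse can be expressed directly; the two-turn case works out to roughly six months of stock (the dollar figures are illustrative):

```python
def inventory_turns(cogs, beginning_inventory, ending_inventory):
    # Cost of goods sold divided by average inventory over the period.
    return cogs / ((beginning_inventory + ending_inventory) / 2)

def average_days_to_sell(turns):
    # The inverse measure: days of a year per inventory turn.
    return 365 / turns

turns = inventory_turns(100_000.0, 45_000.0, 55_000.0)  # 2.0 turns
days = average_days_to_sell(turns)                      # 182.5 days
```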
While these accounting measures of inventory are very useful because of their simplicity, they are also fraught with the danger of their own assumptions. There are, in fact, so many things that can vary hidden under this appearance of simplicity that a variety of 'adjusting' assumptions may be used. These include:
- Specific Identification
- Weighted Average Cost
- Moving-Average Cost
- FIFO and LIFO.
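To see how the choice of assumption changes the reported figure, here is a minimal sketch comparing FIFO with weighted average cost on the same (made-up) purchase lots:

```python
def fifo_cogs(lots, units_sold):
    """Cost of goods sold under FIFO: consume the oldest lots first.
    lots is a list of (quantity, unit_cost) tuples in purchase order."""
    cost, remaining = 0.0, units_sold
    for qty, unit_cost in lots:
        take = min(qty, remaining)
        cost += take * unit_cost
        remaining -= take
        if remaining == 0:
            break
    return cost

def weighted_average_cogs(lots, units_sold):
    """Cost of goods sold priced at the weighted average cost of all lots."""
    total_units = sum(q for q, _ in lots)
    total_cost = sum(q * c for q, c in lots)
    return units_sold * total_cost / total_units

lots = [(100, 1.00), (100, 2.00)]  # hypothetical: prices rose between lots
fifo = fifo_cogs(lots, 150)                # 100*1.00 + 50*2.00 = 200.0
avg = weighted_average_cogs(lots, 150)     # 150 * 1.50 = 225.0
```

Selling the same 150 units yields a different cost of goods sold (and hence a different reported margin) under each assumption, which is exactly the danger the text warns about.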
Inventory turn is a financial accounting tool for evaluating inventory; it is not necessarily a management tool. Inventory management should be forward-looking. Because the methodology is based on the historical cost of goods sold, the ratio may not reflect future production or customer demand.
Business models, including Just in Time (JIT) Inventory, Vendor Managed Inventory (VMI) and Customer Managed Inventory (CMI), attempt to minimize on-hand inventory and increase inventory turns. VMI and CMI have gained considerable attention due to the success of third-party vendors who offer added expertise and knowledge that organizations may not possess.
Knowledge management (KM) comprises a range of strategies and practices used in an organization to identify, create, represent, distribute, and enable adoption of insights and experiences. Such insights and experiences comprise knowledge, either embodied in individuals or embedded in organizations as processes or practices.
An established discipline since 1991 (see Nonaka 1991), KM includes courses taught in the fields of business administration, information systems, management, and library and information sciences (Alavi & Leidner 1999). More recently, other fields have started contributing to KM research; these include information and media, computer science, public health, and public policy.
Many large companies and non-profit organizations have resources dedicated to internal KM efforts, often as a part of their business strategy, information technology, or human resource management departments (Addicott, McGivern & Ferlie 2006). Several consulting companies also exist that provide strategy and advice regarding KM to these organizations.
Knowledge management efforts typically focus on organizational objectives such as improved performance, competitive advantage, innovation, the sharing of lessons learned, integration and continuous improvement of the organization. KM efforts overlap with organizational learning, and may be distinguished from that by a greater focus on the management of knowledge as a strategic asset and a focus on encouraging the sharing of knowledge.
KM efforts have a long history, to include on-the-job discussions, formal apprenticeship, discussion forums, corporate libraries, professional training and mentoring programs. More recently, with increased use of computers in the second half of the 20th century, specific adaptations of technologies such as knowledge bases, expert systems, knowledge repositories, group decision support systems, intranets, and computer-supported cooperative work have been introduced to further enhance such efforts.
In 1999, the term personal knowledge management was introduced which refers to the management of knowledge at the individual level (Wright 2005).
In terms of the enterprise, early collections of case studies recognized the importance of knowledge management dimensions of strategy, process, and measurement (Morey, Maybury & Thuraisingham 2002). Key lessons learned included: people and the cultural norms which influence their behaviors are the most critical resources for successful knowledge creation, dissemination, and application.
Cognitive, social, and organizational learning processes are essential to the success of a knowledge management strategy; and measurement, benchmarking, and incentives are essential to accelerate the learning process and to drive cultural change. In short, knowledge management programs can yield impressive benefits to individuals and organizations if they are purposeful, concrete, and action-oriented.
More recently with the advent of the Web 2.0, the concept of Knowledge Management has evolved towards a vision more based on people participation and emergence. This line of evolution is termed Enterprise 2.0 (McAfee 2006). However, there is an ongoing debate and discussions (Lakhani & McAfee 2007) as to whether Enterprise 2.0 is just a fad that does not bring anything new or useful or whether it is, indeed, the future of knowledge management (Davenport 2008).
KM emerged as a scientific discipline in the early 1990s. It was initially supported solely by practitioners, as when Skandia hired Leif Edvinsson of Sweden as the world's first Chief Knowledge Officer (CKO).
Hubert Saint-Onge (formerly of CIBC, Canada) started investigating various aspects of KM long before that. The objective of CKOs is to manage and maximize the intangible assets of their organizations. Gradually, CKOs became interested in practical as well as theoretical aspects of KM, and the new research field was formed.
KM ideas were taken up by academics such as Ikujiro Nonaka (Hitotsubashi University), Hirotaka Takeuchi (Hitotsubashi University), Thomas H. Davenport (Babson College) and Baruch Lev (New York University). In 2001, Thomas A. Stewart, former editor at FORTUNE Magazine and subsequently the editor of Harvard Business Review, published a cover story highlighting the importance of intellectual capital in organizations.
Since its establishment, the KM discipline has been gradually moving towards academic maturity. First, there is a trend towards greater cooperation among academics; in particular, there has been a drop in single-authored publications. Second, the role of practitioners has changed. Their contribution to academic research has been drastically declining, from 30% of overall contributions up to 2002 to only 10% by 2009 (Serenko et al. 2010).
A broad range of thoughts on the KM discipline exists with no unanimous agreement; approaches vary by author and school. As the discipline matures, academic debates have increased regarding both the theory and practice of KM, to include the following perspectives:
Techno-centric with a focus on technology, ideally those that enhance knowledge sharing and creation.
Organizational with a focus on how an organization can be designed to facilitate knowledge processes best.
Ecological with a focus on the interaction of people, identity, knowledge, and environmental factors as a complex adaptive system akin to a natural ecosystem.
Regardless of the school of thought, core components of KM include people, processes, and technology (or culture, structure, and technology), depending on the specific perspective (Spender & Scherer 2007). Different KM schools of thought offer various lenses through which KM can be viewed and explained, including:
community of practice (Wenger, McDermott & Synder 2001)
social network analysis
intellectual capital (Bontis & Choo 2002)
information theory (McInerney 2002)
constructivism (Nanjappa & Grant 2003)
The practical relevance of academic research in KM has been questioned (Ferguson 2005), with action research suggested as having more relevance (Andriessen 2004), along with the need to translate the findings presented in academic journals into practice (Booker, Bontis & Serenko 2008).
Different frameworks for distinguishing between different 'types of' knowledge exist. One proposed framework for categorizing the dimensions of knowledge distinguishes between tacit knowledge and explicit knowledge.
Tacit knowledge represents internalized knowledge that an individual may not be consciously aware of, such as how he or she accomplishes particular tasks. At the opposite end of the spectrum, explicit knowledge represents knowledge that the individual holds consciously in mental focus, in a form that can easily be communicated to others. (Alavi & Leidner 2001).
Similarly, Hayes and Walsham (2003) describe content and relational perspectives of knowledge and knowledge management as two fundamentally different epistemological perspectives. The content perspective suggests that knowledge is easily stored because it may be codified, while the relational perspective recognizes the contextual and relational aspects of knowledge, which can make knowledge difficult to share outside of the specific location where it is developed.
Early research suggested that a successful KM effort needs to convert internalized tacit knowledge into explicit knowledge in order to share it, but the same effort must also permit individuals to internalize and make personally meaningful any codified knowledge retrieved from the KM effort. Subsequent research into KM suggested that a distinction between tacit knowledge and explicit knowledge represented an oversimplification and that the notion of explicit knowledge is self-contradictory.
Specifically, for knowledge to be made explicit, it must be translated into information (i.e., symbols outside of our heads) (Serenko & Bontis 2004). Later on, Ikujiro Nonaka proposed a model (SECI for Socialization, Externalization, Combination, Internalization) which considers a spiraling knowledge process interaction between explicit knowledge and tacit knowledge (Nonaka & Takeuchi 1995).
In this model, knowledge follows a cycle in which implicit knowledge is 'extracted' to become explicit knowledge, and explicit knowledge is 're-internalized' into implicit knowledge. More recently, together with Georg von Krogh, Nonaka returned to his earlier work in an attempt to move the debate about knowledge conversion forwards (Nonaka & von Krogh 2009).
A second proposed framework for categorizing the dimensions of knowledge distinguishes between embedded knowledge of a system outside of a human individual (e.g., an information system may have knowledge embedded into its design) and embodied knowledge representing a learned capability of a human body’s nervous and endocrine systems (Sensky 2002).
A third proposed framework for categorizing the dimensions of knowledge distinguishes between the exploratory creation of "new knowledge" (i.e., innovation) vs. the transfer or exploitation of "established knowledge" within a group, organization, or community. Collaborative environments such as communities of practice or the use of social computing tools can be used for both knowledge creation and transfer.
Knowledge may be accessed at three stages: before, during, or after KM-related activities. Different organizations have tried various knowledge capture incentives, including making content submission mandatory and incorporating rewards into performance measurement plans. Considerable controversy exists over whether incentives work or not in this field and no consensus has emerged.
One strategy to KM involves actively managing knowledge (push strategy). In such an instance, individuals strive to explicitly encode their knowledge into a shared knowledge repository, such as a database, as well as retrieving knowledge they need that other individuals have provided to the repository. This is also commonly known as the Codification approach to KM.
Another strategy to KM involves individuals making knowledge requests of experts associated with a particular subject on an ad hoc basis (pull strategy). In such an instance, expert individual(s) can provide their insights to the particular person or people needing this (Snowden 2002). This is also commonly known as the Personalization approach to KM.
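A codification (push) repository can be sketched as a simple keyword index into which contributors encode their knowledge and from which seekers retrieve it; the class and its API here are purely illustrative:

```python
from collections import defaultdict

class KnowledgeRepository:
    """Toy codification-style ('push') repository: contributors encode
    documents, seekers retrieve them by keyword. Illustrative only."""

    def __init__(self):
        self.docs = {}                 # doc_id -> document text
        self.index = defaultdict(set)  # keyword -> set of doc_ids

    def contribute(self, doc_id, text):
        # A contributor pushes knowledge into the shared repository.
        self.docs[doc_id] = text
        for word in text.lower().split():
            self.index[word].add(doc_id)

    def retrieve(self, keyword):
        # A seeker pulls back whatever others have codified.
        return [self.docs[d] for d in sorted(self.index[keyword.lower()])]

repo = KnowledgeRepository()
repo.contribute("doc1", "Reducing gear noise with a new alloy")
repo.contribute("doc2", "Gear lubrication schedule")
```

Searching "gear" now returns both documents; the personalization (pull) strategy, by contrast, would route the question to the engineer rather than to stored text.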
Other knowledge management strategies and instruments for companies include:
rewards (as a means of motivating for knowledge sharing)
storytelling (as a means of transferring tacit knowledge)
after action reviews
knowledge mapping (a map of knowledge repositories within a company accessible by all)
communities of practice
expert directories (to enable a knowledge seeker to reach the experts)
best practice transfer
competence management (systematic evaluation and planning of competences of individual organization members)
proximity & architecture (the physical situation of employees can be either conducive or obstructive to knowledge sharing)
collaborative technologies (groupware, etc.)
knowledge repositories (databases, bookmarking engines, etc.)
measuring and reporting intellectual capital (a way of making knowledge assets explicit for companies)
knowledge brokers (organizational members who take on responsibility for a specific "field" and act as the first point of reference on that subject)
social software (wikis, social bookmarking, blogs, etc.)
Inter-project knowledge transfer
A number of claims exist as to the motivations leading organizations to undertake a KM effort. Typical considerations driving a KM effort include:
Making available increased knowledge content in the development and provision of products and services
Achieving shorter new product development cycles
Facilitating and managing innovation and organizational learning
Leveraging the expertise of people across the organization
Increasing network connectivity between internal and external individuals
Managing business environments and allowing employees to obtain relevant insights and ideas appropriate to their work
Solving intractable or wicked problems
Managing intellectual capital and intellectual assets in the workforce (such as the expertise and know-how possessed by key individuals)
Debate exists over whether KM is more than a passing fad, though an increasing amount of research in this field may help to answer this question, as well as to create consensus on what elements of KM help determine the success or failure of such efforts (Wilson 2002).
Early KM technologies included online corporate yellow pages as expertise locators and document management systems. Combined with the early development of collaborative technologies (in particular Lotus Notes), KM technologies expanded in the mid-1990s. Subsequent KM efforts leveraged semantic technologies for search and retrieval and the development of e-learning tools for communities of practice (Capozzi 2007).
Knowledge management systems can thus be categorized as falling into one or more of the following groups: Groupware, document management systems, expert systems, semantic networks, relational and object oriented databases, simulation tools, and artificial intelligence (Gupta & Sharma 2004)
More recently, the development of social computing tools (such as bookmarks, blogs, and wikis) has allowed more unstructured, self-governing or ecosystem approaches to the transfer, capture and creation of knowledge. This includes the development of new forms of communities, networks, or matrixed organizations.
However such tools for the most part are still based on text and code, and thus represent explicit knowledge transfer. These tools face challenges in distilling meaningful re-usable knowledge and ensuring that their content is transmissible through diverse channels (Andrus 2005).
Software tools in knowledge management are a collection of technologies and are not necessarily acquired as a single software solution. Furthermore, these knowledge management software tools have the advantage of using the organization's existing information technology infrastructure.
Organizations and business decision makers spend a great deal of resources and make significant investments in the latest technology, systems and infrastructure to support knowledge management. It is imperative that these investments are validated properly, made wisely and that the most appropriate technologies and software tools are selected or combined to facilitate knowledge management.
Knowledge management has also become a cornerstone in emerging business strategies such as Service Lifecycle Management (SLM) with companies increasingly turning to software vendors to enhance their efficiency in industries including, but not limited to, the aviation industry.
"Knowledge manager" is a role and designation that has gained popularity over the past decade. The role has evolved drastically from that of one involving the creation and maintenance of knowledge repositories to one that involves influencing the culture of an organization toward improved knowledge sharing, reuse, learning, collaboration and innovation.
Knowledge management functions are associated with different departments in different organizations. It may be combined with Quality, Sales, HR, Innovation, Operations etc. and is likely to be determined by the KM motivation of that particular organization.
Knowledge managers have varied backgrounds ranging from Information Sciences to Business Management. An effective knowledge manager is likely to be someone who has a versatile skills portfolio and is comfortable with the concepts of organizational behavior/culture, processes, branding & marketing and collaborative technology.
Knowledge Management System
Knowledge Management System (KM System) refers to a system, generally IT-based, for managing knowledge in organizations: supporting the creation, capture, storage and dissemination of information. It can comprise a part (neither necessary nor sufficient) of a Knowledge Management initiative.
The idea of a KM system is to enable employees to have ready access to the organization's documented base of facts, sources of information, and solutions. For example, a typical claim justifying the creation of a KM system might run something like this: an engineer could know the metallurgical composition of an alloy that reduces sound in gear systems.
Sharing this information organization wide can lead to more effective engine design and it could also lead to ideas for new or improved equipment.
A KM system could be any of the following:
- Document based: any technology that permits creation/management/sharing of formatted documents, such as Lotus Notes, SharePoint, the web, distributed databases, etc.
- Ontology/Taxonomy based: these are similar to document technologies in the sense that a system of terminologies (i.e. an ontology) is used to summarize the document, e.g. Author, Subject, Organization etc., as in DAML and other XML-based ontologies
- Based on AI technologies which use a customized representation scheme to represent the problem domain.
- Network-map based: providing network maps of the organization showing the flow of communication between entities and individuals
- Increasingly social computing tools are being deployed to provide a more organic approach to creation of a KM system.
KM systems deal with information (although Knowledge Management as a discipline may extend beyond the information-centric aspect of any system), so they are a class of information system and may build on, or utilize, other information sources. Distinguishing features of a KMS can include:
- Purpose: a KMS will have an explicit Knowledge Management objective of some type such as collaboration, sharing good practice or the like.
- Context: one perspective on KMS sees knowledge as information that is meaningfully organized, accumulated and embedded in a context of creation and application.
- Processes: KMS are developed to support and enhance knowledge-intensive processes, tasks or projects of e.g., creation, construction, identification, capturing, acquisition, selection, valuation, organization, linking, structuring, formalization, visualization, transfer, distribution, retention, maintenance, refinement, revision, evolution, accessing, retrieval and last but not least the application of knowledge, also called the knowledge life cycle.
- Participants: Users can play the roles of active, involved participants in knowledge networks and communities fostered by KMS, although this is not necessarily the case. KMS designs are held to reflect that knowledge is developed collectively and that the “distribution” of knowledge leads to its continuous change, reconstruction and application in different contexts, by different participants with differing backgrounds and experiences.
- Instruments: KMS support KM instruments, e.g., the capture, creation and sharing of the codifiable aspects of experience, the creation of corporate knowledge directories, taxonomies or ontologies, expertise locators, skill management systems, collaborative filtering and handling of interests used to connect people, the creation and fostering of communities or knowledge networks.
A KMS offers integrated services to deploy KM instruments for networks of participants, i.e. active knowledge workers, in knowledge-intensive business processes along the entire knowledge life cycle. KMS can be used for a wide range of cooperative, collaborative, adhocracy and hierarchy communities, virtual organizations, societies and other virtual networks.
KMS can be used to manage media content; activities, interactions and work-flows; projects; works, networks, departments, privileges, roles, participants and other active users, in order to extract and generate new knowledge and to enhance, leverage and transfer knowledge into new outcomes, providing new services using new formats, interfaces and communication channels.
The term KMS can be associated with Open Source Software, Open Standards, Open Protocols and Open Knowledge licenses, initiatives and policies.
Benefits & Issues of knowledge management
Some of the advantages claimed for KM systems are:
- Sharing of valuable organizational information throughout the organizational hierarchy, which can avoid re-inventing the wheel and reduce redundant work
- May reduce training time for new employees
- Retention of intellectual property after an employee leaves, if such knowledge can be codified
- Time management
Knowledge sharing remains a challenging issue for knowledge management, and while there is no clear agreement, barriers may include time pressures on knowledge workers, the level of trust, lack of effective support technologies, and culture (Jennex 2008).
Less Than Truckload (LTL)
Less than truckload (LTL) shipping is the transportation of relatively small freight. The alternatives to LTL carriers are parcel carriers and full truckload carriers. Parcel carriers usually handle small packages and freight that can be broken down into units of less than 150 pounds (68 kg). Full truckload carriers move freight that is loaded into a semi-trailer. Semi-trailers are typically between 26 and 53 feet (7.92 and 16.15 m) long and thereby require a substantial amount of freight to make such transportation economical.
Logistics management is that part of supply chain management that plans, implements, and controls the efficient, effective, forward and reverse flow and storage of goods, services, and related information between the point of origin and the point of consumption in order to meet customers' requirements.
Motor carrier may refer to:
- A company which employs drivers of large semi-trucks and buses. See Federal Motor Carrier Safety Administration (FMCSA).
- Motor Carrier, an English automobile.
Online Transaction Processing
Online transaction processing, or OLTP, refers to a class of systems that facilitate and manage transaction-oriented applications, typically for data entry and retrieval transaction processing. The term is somewhat ambiguous; some understand a "transaction" in the context of computer or database transactions, while others (such as the Transaction Processing Performance Council) define it in terms of business or commercial transactions.
OLTP has also been used to refer to processing in which the system responds immediately to user requests. An automatic teller machine (ATM) for a bank is an example of a commercial transaction processing application.
OLAP (Online Analytical Processing) is a methodology to provide end users with access to large amounts of data in an intuitive and rapid manner to assist with deductions based on investigative reasoning.
Online transaction processing increasingly requires support for transactions that span a network and may include more than one company. For this reason, modern online transaction processing software uses client/server processing and brokering software that allows transactions to run on different computer platforms in a network.
In large applications, efficient OLTP may depend on sophisticated transaction management software (such as CICS) and/or database optimization tactics to facilitate the processing of large numbers of concurrent updates to an OLTP-oriented database.
For even more demanding decentralized database systems, OLTP brokering programs can distribute transaction processing among multiple computers on a network. OLTP is often integrated into service-oriented architecture (SOA) and Web services.
Online Transaction Processing has two key benefits: simplicity and efficiency. Reduced paper trails and the faster, more accurate forecasts for revenues and expenses are both examples of how OLTP makes things simpler for businesses.
As with any information processing system, security and reliability are considerations. Online transaction systems are generally more susceptible to direct attack and abuse than their offline counterparts.
When organizations choose to rely on OLTP, operations can be severely impacted if the transaction system or database is unavailable due to data corruption, systems failure, or network availability issues. Additionally, like many modern online information technology solutions, some systems require offline maintenance which further affects the cost-benefit analysis.
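The atomicity that OLTP applications depend on can be demonstrated with an in-memory SQLite database; the ATM-style transfer below is a toy sketch, not a production design:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE accounts (id TEXT PRIMARY KEY, balance REAL)")
conn.executemany("INSERT INTO accounts VALUES (?, ?)",
                 [("alice", 100.0), ("bob", 50.0)])
conn.commit()

def transfer(conn, src, dst, amount):
    """Both updates commit together or not at all (atomicity)."""
    try:
        with conn:  # commits on success, rolls back on any exception
            conn.execute("UPDATE accounts SET balance = balance - ? WHERE id = ?",
                         (amount, src))
            (balance,) = conn.execute("SELECT balance FROM accounts WHERE id = ?",
                                      (src,)).fetchone()
            if balance < 0:
                raise ValueError("insufficient funds")
            conn.execute("UPDATE accounts SET balance = balance + ? WHERE id = ?",
                         (amount, dst))
    except ValueError:
        pass  # transaction was rolled back; balances are unchanged

transfer(conn, "alice", "bob", 30.0)   # succeeds: alice 70, bob 80
transfer(conn, "alice", "bob", 500.0)  # overdraft: rolled back, no change
```

The failed transfer leaves both balances exactly as the successful one left them, which is the behavior an ATM network must guarantee even when the system fails mid-transaction.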
Parcel may refer to:
- a package, sent through the mail or package delivery;
- parcel (consignment);
- land lot, a piece of land;
- fluid parcel, a concept in fluid dynamics;
- an object used in the game "Pass the parcel".
A private carrier is a company that transports only its own goods. Usually the carrier's primary business is not transportation but something else. For example, the Wegmans grocery store chain owns and operates its own private fleet to deliver produce and goods to its stores. Wegmans' primary business is not transportation but grocery retail; therefore the Wegmans fleet is a private carrier. Other corporations with private fleets include Bridgestone-Firestone, Toyota, Ace Hardware, and Archer Daniels Midland. For more information about private fleets, see the National Private Truck Council, the U.S. association representing private motor truck fleets, founded in 1939.
A private carrier is distinguished from a common carrier whose primary business is the transport of goods, and which is in business to serve any customers that hire them, such as buses, railroads, trucking companies, airlines and taxis. Private carriers may refuse to sell their services at their own discretion, whereas common carriers must treat all customers equally. Yellow Transportation and FedEx are examples of common motor carriers.
It is also distinguished from an independent carrier, which is an individual owner-operator or trucker who may make deals with private carriers, common carriers, contract carriers, or others as he or she wishes.
Although establishing and operating a private fleet is a substantial expense, it is warranted when customer service is paramount. Common carriers require shipment from fixed points while private carriers can set up any pickup or drop-off points desired. Some corporations mix both systems, using common carriers where possible and supplementing with private carriage (called a blended operation).
Private carriage usually refers to trucking, but is also found in rail and water transportation. Private rail carriers include the Black Mesa and Lake Powell Railroad in Arizona.
Beyond physical transportation, "private carrier" may also refer to communications services. Certain frequencies restricted to use by law enforcement are sometimes called "private carriers": station class codes beginning with FB6 or FB7 are private carriers. In the telecommunications industry, defining "private carrier" and "common carrier" has become increasingly difficult with the growth of mobile phone service providers, VoIP, and other non-traditional means of delivering communication services.
Private electronic market
A private electronic market (PEM) uses the Internet to connect a limited number of pre-qualified buyers and sellers in one market. PEMs are a hybrid between perfectly open markets (e.g. exchanges where there is no pre-existing relationship between buyer and seller, similar to eBay) and closed contract negotiations (such as a sealed-bid tender, where there is no visibility between competitors and hence no response to competition).
The core idea of PEMs is to create competition among buyers/sellers while allowing buyers/sellers to adjust all those aspects of the deal that are typically only dealt with in a negotiation. This creates a problem of "comparing apples and oranges": bids may be quite different in many dimensions and therefore cannot easily be compared.
Apart from the dimension of price these could include pre-negotiated discounts (e.g. for loyalty), specific qualities, combinations of goods and services with conditional pricing, freight differentials, contract fulfillment timing, payment terms, or deliberate constraints such as market share limits.
A government-owned agency in Australia (VicForests) regularly invites a number of saw mills to bid for native timber supply (forestauctions.com). The VicForests private electronic market allows saw mills to specify exactly the volume they require, the quality, species, payment terms, etc.
Saw mills can also create conditional bids such as "if I win x and y, I am willing to pay more". Further, bids are adjusted for participant-specific factors such as transport costs: in effect, a saw mill that is further away will have to bid more than one that is close by. Similarly, a bid for one particular lot may be the highest, but a bundle created by another participant may still win based on higher total revenue.
Participants receive real-time feedback on where they stand with their current bids and are able to respond. Compared to the traditional sealed bid tender approach, VicForests' PEM resulted in a substantial revenue increase.
The overall effect of a well-designed private electronic market is what is described as allocative efficiency, or in simple terms a win-win for the seller (who maximizes revenue) and the buyers (who acquire exactly what is of highest value to them). PEMs are based on game theory and combinatorial auction theory.
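The bundle-versus-lot situation described above is a small combinatorial auction, and for a handful of bids the winning allocation can be found by brute force. A sketch with made-up mills, lots and prices:

```python
from itertools import combinations

# Toy winner determination for a combinatorial auction, the mechanism
# PEMs build on. Each bid offers one price for a bundle of lots; the
# seller picks the set of non-overlapping bids with the highest total
# revenue. All names and amounts are hypothetical.
bids = [
    ("mill_A", {"lot1"}, 100),
    ("mill_B", {"lot2"}, 120),
    ("mill_C", {"lot1", "lot2"}, 250),  # bundle bid: "if I win x and y"
]

def best_allocation(bids):
    best, best_revenue = [], 0
    # Brute force over all subsets -- feasible for small markets only.
    for r in range(1, len(bids) + 1):
        for combo in combinations(bids, r):
            lots = [lot for _, bundle, _ in combo for lot in bundle]
            if len(lots) != len(set(lots)):
                continue  # two bids claim the same lot; allocation invalid
            revenue = sum(price for _, _, price in combo)
            if revenue > best_revenue:
                best, best_revenue = list(combo), revenue
    return best, best_revenue

winners, revenue = best_allocation(bids)
# mill_C's bundle (250) beats mill_A plus mill_B (100 + 120 = 220)
```

Real PEMs use far more scalable winner-determination algorithms, since the subset search grows exponentially with the number of bids.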
Procurement is the acquisition of goods and/or services. Ideally, the goods or services are appropriate and are procured at the best possible cost to meet the needs of the purchaser in terms of quality, quantity, time, and location. Corporations and public bodies often define processes intended to promote fair and open competition for their business while minimizing exposure to fraud and collusion.
The noun product is defined as a "thing produced by labor or effort" or the "result of an act or a process", and stems from the verb produce, from the Latin prōdūce(re) '(to) lead or bring forth'. Since 1575, the word "product" has referred to anything
produced. Since 1695, the word has referred to "thing or things produced". The economic or commercial meaning of product was first used by political economist Adam Smith.
In marketing, a product is anything that can be offered to a market that might satisfy a want or need. In retailing, products are called merchandise. In manufacturing, products are purchased as raw materials and sold as finished goods. Commodities are usually raw materials such as metals and agricultural products, but a commodity can also be anything widely available in the open market.
In project management, products are the formal definition of the project deliverables that make up or contribute to delivering the objectives of the project. In insurance, the policies are considered products offered for sale by the insurance company that created the contract.
In general, product may refer to a single item or unit, a group of equivalent products, a grouping of goods or services, or an industrial classification for the goods or services.
A related concept is subproduct, a secondary but useful result of a production process.
Dangerous products, particularly physical ones, that cause injuries to consumers or bystanders may be subject to product liability.
Tangible and intangible products
Products can be classified as tangible or intangible. A tangible product is any physical product that can be touched, like a computer or an automobile. An intangible product is a non-physical product, like an insurance policy.
Sears Roebuck model
In its online product catalog, retailer Sears, Roebuck and Company divides its products into "departments", then presents products to potential shoppers according to (1) function or (2) brand. Each product has a Sears item-number and a manufacturer's model-number. Sears uses the departments and product groupings with the intention of helping customers browse products by function or brand within a traditional department-store structure.
A product line is "a group of products that are closely related, either because they function in a similar manner, are sold to the same customer groups, are marketed through the same types of outlets, or fall within given price ranges." Many businesses offer a range of product lines which may be unique to a single organization or may be common across the business's industry.
In 2002 the US Census compiled revenue figures for the finance and insurance industry by various product lines such as "accident, health and medical insurance premiums" and "income from secured consumer loans". Within the insurance industry, product lines are indicated by the type of risk coverage, such as auto insurance, commercial insurance and life insurance.
National and international product classifications
Various classification systems for products have been developed for economic statistical purposes.
The NAFTA signatories are working on a product classification system, the North American Product Classification System (NAPCS), as a companion to the North American Industry Classification System (NAICS). The European Union uses a "Classification of Products by Activity" among other product classifications. The United Nations also classifies products for international economic activity.
The Aspinwall Classification System classifies and rates products based on five variables:
- Replacement rate (How frequently is the product repurchased?)
- Gross margin (How much profit is obtained from each product?)
- Buyer goal adjustment (How flexible are the buyers' purchasing habits with regard to this product?)
- Duration of product satisfaction (How long will the product produce benefits for the user?)
- Duration of buyer search behavior (How long will consumers shop for the product?)
The National Institute of Governmental Purchasing (NIGP) developed a commodity and services classification system for use by state and local governments, the NIGP Code. The NIGP Code is used by 33 states within the United States as well as thousands of cities, counties and political subdivisions.
The NIGP Code is a hierarchical schema consisting of a 3 digit class, 5 digit class-item, 7 digit class-item-group and an 11 digit class-item-group-detail. Applications of the NIGP Code include vendor registration, inventory item identification, contract item management, spend analysis and strategic sourcing.
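Because the NIGP Code is hierarchical, each shorter level is a prefix of the longer ones, so a full code can be decomposed by truncation. A sketch, using a made-up code:

```python
def nigp_levels(code):
    """Split an NIGP code string into its hierarchy levels.

    Schema as described above: 3-digit class, 5-digit class-item,
    7-digit class-item-group, 11-digit class-item-group-detail.
    Separator characters are ignored.
    """
    digits = code.replace("-", "")
    levels = {}
    if len(digits) >= 3:
        levels["class"] = digits[:3]
    if len(digits) >= 5:
        levels["class-item"] = digits[:5]
    if len(digits) >= 7:
        levels["class-item-group"] = digits[:7]
    if len(digits) >= 11:
        levels["class-item-group-detail"] = digits[:11]
    return levels

# A made-up 11-digit code, written with separators for readability:
sample = nigp_levels("123-45-67-8901")
# sample["class"] == "123", sample["class-item"] == "12345", and so on
```

The same prefix property is what makes the code usable for spend analysis: transactions coded at the 11-digit level roll up automatically to the 7-, 5- and 3-digit levels.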
A manufacturer usually provides an identifier for each particular type of product they make, known as a model, model variant, or model number.
For example, Dyson Ltd, a manufacturer of appliances (mainly vacuum cleaners), requires customers to identify their model in the support section of the website. Brand and model can be used together to identify products in the market. The model number is not necessarily the same as the manufacturer part number (MPN).
A specific unit of a product is usually identified by a serial number.
A service is the intangible equivalent of an economic good. Service provision is often an economic activity where the buyer does not generally, except by exclusive contract, obtain exclusive ownership of the thing purchased.
By composing and orchestrating the appropriate level of resources, skill, ingenuity, and experience to effect specific benefits for service consumers, service providers participate in an economy without the restrictions of carrying stock (inventory) or the need to concern themselves with bulky raw materials.
On the other hand, their investment in expertise does require consistent service marketing and upgrading in the face of competition which has equally few physical restrictions. Many so-called services, however, require large physical structures and equipment, and consume large amounts of resources, such as transportation services and the military.
Services can be described in terms of their generic key characteristics:
1. Intangibility
Services are intangible and insubstantial: they cannot be touched, gripped, handled, looked at, smelled, tasted or heard. Thus, there is neither potential nor need for transport, storage or stocking of services. Furthermore, a service cannot be (re)sold or owned by somebody, neither can it be turned over from the service provider to the service consumer nor returned from the service consumer to the service provider. Solely the service delivery can be commissioned to a service provider, who must generate and render the service at the distinct request of an authorized service consumer.
2. Perishability
Services are perishable in two regards:
- The service-relevant resources, processes and systems are assigned for service delivery during a definite period in time. If the designated or scheduled service consumer does not request and consume the service during this period, the service cannot be performed for him. Example: an empty seat on a plane can never be utilized or charged after departure.
- When the service has been completely rendered to the requesting service consumer, this particular service irreversibly vanishes as it has been consumed by the service consumer. Example: the passenger has been transported to the destination and cannot be transported again to this location at this point in time.
3. Inseparability
The service provider is indispensable for service delivery, as he must promptly generate and render the service to the requesting service consumer. In many cases the service delivery is executed automatically, but the service provider must specifically assign resources and systems and actively maintain appropriate service delivery readiness and capabilities.
Additionally, the service consumer is inseparable from service delivery because he is involved in it from requesting it up to consuming the rendered benefits. Examples: the service consumer must sit in the hairdresser's chair or in the plane seat; correspondingly, the hairdresser or the pilot must be in the same shop or plane, respectively, to deliver the service.
4. Simultaneity
Services are rendered and consumed during the same period of time. As soon as the service consumer has requested the service (delivery), the particular service must be generated from scratch without any delay or friction, and the service consumer instantaneously consumes the rendered benefits for executing his upcoming activity or task.
5. Variability
Each service is unique. It is one-time generated, rendered and consumed and can never be exactly repeated, as the point in time, location, circumstances, conditions, current configurations and/or assigned resources are different for the next delivery, even if the same service consumer requests the same service. Many services are regarded as heterogeneous or lacking homogeneity and are typically modified for each service consumer or each new situation (customized).
Example: The taxi service which transports the service consumer from his home to the opera is different from the taxi service which transports the same service consumer from the opera to his home – another point in time, the other direction, maybe another route, probably another taxi driver and cab.
Each of these characteristics is retractable per se, and their inevitable variations complicate the consistent conception of services and make service delivery a challenge in each and every case. Proper service marketing requires creative visualization to effectively evoke a concrete image in the service consumer's mind. From the service consumer's point of view, these characteristics make it difficult, or even impossible, to evaluate or compare services prior to experiencing the service delivery.
Mass generation and delivery of services is very difficult. This can be seen as a problem of inconsistent service quality. Both inputs and outputs to the processes involved in providing services are highly variable, as are the relationships between these processes, making it difficult to maintain consistent service quality. Many services are labor-intensive, as they usually involve considerable human activity rather than a precisely determined process; exceptions include utilities. Human resource management is important.
The human factor is often the key success factor in service economies. It is difficult to achieve economies of scale or gain dominant market share. There are demand fluctuations and it can be difficult to forecast demand. Demand can vary by season, time of day, business cycle, etc.
There is consumer involvement as most service provision requires a high degree of interaction between service consumer and service provider. There is a customer-based relationship based on creating long-term business relationships. Accountants, attorneys, and financial advisers maintain long-term relationships with their clients for decades. These repeat consumers refer friends and family, helping to create a client-based relationship.
In economics and accounting, a good is a product that can be used to satisfy some desire or need. More narrowly but commonly, a good is a tangible physical product that can be contrasted with a service which is intangible. As such, it is capable of being delivered to a purchaser and involves the transfer of ownership from seller to customer.
For example, an apple is a tangible good, as opposed to a haircut, which is an (intangible) service. One usage that preserves the distinction between goods and services by including both is commodity. In microeconomics, a 'good' is often used in this inclusive sense of the word (Milgate, 1987).
Utility characteristics of goods
A good is any object that increases the utility of the consumer directly or indirectly. Goods are usually modeled as having diminishing marginal utility. Some things are useful but not scarce enough to have monetary value, such as the Earth's atmosphere; these are referred to as 'free goods'.
In economics, a bad is the opposite of a good. Ultimately, whether an object is a good or a bad depends on each individual consumer and therefore, it is important to realize that not all goods are good all the time and not all goods are goods to all people.
Types of goods
Goods can be defined in a variety of ways, depending on a number of characteristics as listed below.
Goods and services have price elasticity. An elastic good is one for which there is a substitute good; for example, as pen prices rise, the cross elasticity of demand would result in consumers buying more pencils instead. An inelastic good is one for which there are few or no substitutes, such as tickets to major sporting events or original works by famous artists.
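The pen-and-pencil example can be made concrete with the standard formula for cross-price elasticity of demand; the percentages below are invented for illustration:

```python
def cross_elasticity(q_old, q_new, p_old, p_new):
    """Cross-price elasticity of demand: the percentage change in the
    quantity demanded of one good divided by the percentage change in
    the price of another good. A positive value indicates substitutes
    (the pen/pencil case above); a negative value indicates complements.
    """
    pct_q = (q_new - q_old) / q_old
    pct_p = (p_new - p_old) / p_old
    return pct_q / pct_p

# Hypothetical numbers: pen price rises 10% (2.00 -> 2.20) and pencil
# demand rises 5% (100 -> 105) as buyers switch.
e = cross_elasticity(100, 105, 2.00, 2.20)
# e is about 0.5, and e > 0 marks pens and pencils as substitutes
```

For an inelastic good with no substitutes, the cross elasticity against other goods' prices would be near zero.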
In business, the term sourcing refers to a number of procurement practices aimed at finding, evaluating and engaging suppliers of goods and services.
Supply Chain Management
Supply chain management (SCM) is the management of a network of interconnected businesses involved in the ultimate provision of product and service packages required by end customers (Harland,
1996). Supply chain management spans all movement and storage of raw materials, work-in-process inventory, and finished goods from point of origin to point of consumption (supply chain).
Another definition is provided by the APICS Dictionary when it defines SCM as the "design, planning, execution, control, and monitoring of supply chain activities with the objective of creating net value, building a competitive infrastructure, leveraging worldwide logistics, synchronizing supply with demand and measuring performance globally."
More common and accepted definitions of supply chain management are:
- Supply chain management is the systematic, strategic coordination of the traditional business functions and the tactics across these business functions within a particular company and across businesses within the supply chain, for the purposes of improving the long-term performance of the individual companies and the supply chain as a whole.
- A customer focused definition is given by Hines (2004:p76) "Supply chain strategies require a total systems view of the linkages in the chain that work together efficiently to create customer satisfaction at the end point of delivery to the consumer. As a consequence costs must be lowered throughout the chain by driving out unnecessary costs and focusing attention on adding value. Throughout efficiency must be increased, bottlenecks removed and performance measurement must focus on total systems efficiency and equitable reward distribution to those in the supply chain adding value. The supply chain system must be responsive to customer requirements."
- Global supply chain forum - supply chain management is the integration of key business processes across the supply chain for the purpose of creating value for customers and stakeholders.
- According to the Council of Supply Chain Management Professionals (CSCMP), supply chain management encompasses the planning and management of all activities involved in sourcing, procurement, conversion, and logistics management. It also includes the crucial components of coordination and collaboration with channel partners, which can be suppliers, intermediaries, third-party service providers, and customers. In essence, supply chain management integrates supply and demand management within and across companies. More recently, the loosely coupled, self-organizing network of businesses that cooperate to provide product and service offerings has been called the Extended Enterprise.
A supply chain, as opposed to supply chain management, is a set of organizations directly linked by one or more of the upstream and downstream flows of products, services, finances, and information from a source to a customer. Managing a supply chain is 'supply chain management'.
Supply chain management software includes tools or modules used to execute supply chain transactions, manage supplier relationships and control associated business processes.
Supply chain event management (abbreviated as SCEM) is a consideration of all possible events and factors that can disrupt a supply chain. With SCEM possible scenarios can be created and solutions devised.
Supply chain management must address the following problems:
- Distribution Network Configuration: number, location and network missions of suppliers, production facilities, distribution centers, warehouses, cross-docks and customers.
- Distribution Strategy: questions of operating control (centralized, decentralized or shared); delivery scheme, e.g., direct shipment, pool point shipping, cross-docking, DSD (direct store delivery), closed loop shipping; mode of transportation, e.g., motor carrier, including truckload, LTL, parcel; railroad; intermodal transport, including TOFC (trailer on flatcar) and COFC (container on flatcar); ocean freight; airfreight; replenishment strategy (e.g., pull, push or hybrid); and transportation control (e.g., owner-operated, private carrier, common carrier, contract carrier, or 3PL).
- Trade-Offs in Logistical Activities: The above activities must be well coordinated in order to achieve the lowest total logistics cost. Trade-offs may increase the total cost if only one of the activities is optimized. For example, full truckload (FTL) rates are more economical on a cost per pallet basis than less than truckload (LTL) shipments. If, however, a full truckload of a product is ordered to reduce transportation costs, there will be an increase in inventory holding costs which may increase total logistics costs. It is therefore imperative to take a systems approach when planning logistical activities. These trade-offs are key to developing the most efficient and effective Logistics and SCM strategy.
- Information: Integration of processes through the supply chain to share valuable information, including demand signals, forecasts, inventory, transportation, potential collaboration, etc.
- Inventory Management: Quantity and location of inventory, including raw materials, work-in-progress (WIP) and finished goods.
- Cash-Flow: Arranging the payment terms and methodologies for exchanging funds across entities within the supply chain.
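The FTL-versus-LTL trade-off described above can be sketched as a simple total-cost comparison: transport cost falls as shipments grow, while average inventory holding cost rises. All figures below are hypothetical:

```python
def total_logistics_cost(shipment_size, annual_demand,
                         cost_per_shipment, unit_holding_cost):
    """Total annual cost = transportation + average inventory holding.

    A larger shipment (e.g. a full truckload) means fewer shipments per
    year but a higher average inventory, approximated here as half the
    shipment size (the classic cycle-stock assumption).
    """
    transport = (annual_demand / shipment_size) * cost_per_shipment
    holding = (shipment_size / 2) * unit_holding_cost
    return transport + holding

# Hypothetical: annual demand of 1,200 pallets, $4/pallet/year holding.
ftl = total_logistics_cost(26, 1200, 600, 4)  # full truckload, 26 pallets
ltl = total_logistics_cost(6, 1200, 300, 4)   # LTL, 6 pallets per shipment
```

With a high enough holding cost the comparison flips, which is the systems-approach point made above: optimizing the transport term alone can raise the total.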
Supply chain execution means managing and coordinating the movement of materials, information and funds across the supply chain. The flow is bi-directional.
Telecommunication is the transmission of information over significant distances to communicate. In earlier times, telecommunications involved the use of visual signals, such as beacons, smoke signals, semaphore telegraphs, signal flags, and optical heliographs, or audio messages via coded drumbeats, lung-blown horns, or sent by loud whistles, for example.
In the modern age of electricity and electronics, telecommunications now also includes the use of electrical devices such as telegraphs, telephones, and teleprinters, the use of radio and microwave communications, as well as fiber optics and their associated electronics, plus the use of the orbiting satellites and the Internet.
A revolution in wireless telecommunications began in the first decade of the 20th century with pioneering developments in wireless radio communications by Nikola Tesla and Guglielmo Marconi. Marconi won the Nobel Prize in Physics in 1909 for his efforts.
Other highly notable pioneering inventors and developers in the field of electrical and electronic telecommunications include Charles Wheatstone and Samuel Morse (telegraph), Alexander Graham Bell (telephone), Edwin Armstrong, and Lee de Forest (radio), as well as John Logie Baird and Philo Farnsworth (television).
The world's effective capacity to exchange information through two-way telecommunication networks grew from 281 petabytes of (optimally compressed) information in 1986, to 471 petabytes in 1993, to 2.2 (optimally compressed) exabytes in 2000, and to 65 (optimally compressed) exabytes in 2007. This is the informational equivalent of 2 newspaper pages per person per day in 1986, and 6 entire newspapers per person per day by 2007.
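The capacity figures quoted above imply a compound annual growth rate that can be checked directly:

```python
def cagr(start, end, years):
    """Compound annual growth rate between two values over `years` years."""
    return (end / start) ** (1 / years) - 1

# Figures quoted above: 281 petabytes in 1986 to 65 exabytes
# (65,000 petabytes) in 2007.
growth = cagr(281, 65000, 2007 - 1986)
# roughly 30% growth per year, sustained for two decades
```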
Given this growth, telecommunications play an increasingly important role in the world economy; the worldwide telecommunication industry's revenue was estimated to be $3.85 trillion in 2008. The service revenue of the global telecommunications industry was estimated to be $1.7 trillion in 2008, and is expected to reach $2.7 trillion by 2013.
The term virtual is a concept applied in many fields with somewhat differing connotations, and also, differing denotations.
The term has been defined in philosophy as "that which is not real" but may display the salient qualities of the real.
Colloquially, virtual is used to mean almost, particularly when used in the adverbial form e.g. "That's virtually [almost] impossible".
Extending the original philosophical definition, the term virtual has also come to mean "modeling through the use of a computer", where the computer models a physical equivalent. Thus, a virtual world models the real world with 3D structures, and virtual reality seeks to model reality, enhancing a virtual world with mechanisms for eye and hand movements.
The word 'virtual' now modifies numerous nouns for real world concepts: virtual appliance, virtual museum, virtual learning environment (VLE), virtual studio, and so on. The use of the word virtual for computer simulation of reality is not recent. The Online Etymology Dictionary reports that the sense of "not physically existing but made to appear by software" appears as early as 1959.
Wire transfer or credit transfer is a method of electronic funds transfer from one person or institution (entity) to another. A wire transfer can be made from one bank account to another bank account or through a transfer of cash at a cash office. Wire transfer systems are intended to provide more individualized transactions than bulk payment systems such as ACH and Check21.
Different wire transfer systems and operators provide a variety of options relative to the immediacy and finality of settlement and the cost, value, and volume of transactions. Central bank wire transfer systems, such as the Federal Reserve's FedWire system in the United States are more likely to be Real time gross settlement (RTGS) systems.
RTGS systems provide the quickest availability of funds because they provide immediate "real-time" and final "irrevocable" settlement by posting the gross (complete) entry against electronic accounts of the wire transfer system operator. Other systems such as CHIPS provide net settlement on a periodic basis.
More immediate settlement systems tend to process higher monetary value time-critical transactions, have higher transaction costs, and a smaller volume of payments. Currency transaction risk (because of market fluctuations) may be reduced (in part) by immediacy of settlement.
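The difference between gross (RTGS-style) and periodic net settlement can be illustrated by netting a few hypothetical interbank payments:

```python
from collections import defaultdict

# Under gross settlement every payment moves individually and finally;
# under net settlement only each bank's net position moves at the end
# of the period. Bank names and amounts are made up.
payments = [
    ("BankA", "BankB", 500),
    ("BankB", "BankA", 300),
    ("BankA", "BankC", 200),
]

def net_positions(payments):
    """Return each bank's end-of-period net position (+ receives, - pays)."""
    pos = defaultdict(int)
    for sender, receiver, amount in payments:
        pos[sender] -= amount
        pos[receiver] += amount
    return dict(pos)

gross_value = sum(amount for _, _, amount in payments)  # 1000 moved under RTGS
net = net_positions(payments)  # {'BankA': -400, 'BankB': 200, 'BankC': 200}
```

Netting reduces the funds that must actually move (here 400 paid out versus 1000 gross), which is why net systems like CHIPS can be cheaper, at the cost of deferred finality.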
World Wide Web
The World Wide Web (abbreviated as WWW or W3, and commonly known as the Web) is a system of interlinked hypertext documents accessed via the Internet. With a web browser, one can view web pages that may contain text, images, videos, and other multimedia and navigate between them via hyperlinks.
Using concepts from earlier hypertext systems, British engineer and computer scientist Sir Tim Berners-Lee, now Director of the World Wide Web Consortium (W3C), wrote a proposal in March 1989 for what would eventually become the World Wide Web.
At CERN in Geneva, Switzerland, Berners-Lee and Belgian computer scientist Robert Cailliau proposed in 1990 to use hypertext "... to link and access information of various kinds as a web of nodes in which the user can browse at will", and they publicly introduced the project in December.
"The World-Wide Web was developed to be a pool of human knowledge, and human culture, which would allow collaborators in remote sites to share their ideas and all aspects of a common project."
The NeXT Computer used by Berners-Lee. The hand label declares, "This machine is a server. DO NOT POWER IT DOWN!!"
In the May 1970 issue of Popular Science magazine, Arthur C. Clarke was reported to have predicted that satellites would one day "bring the accumulated knowledge of the world to your fingertips" using a console that would combine the functionality of the Xerox machine, telephone, television and a small computer, allowing data transfer and video conferencing around the globe.
In March 1989, Tim Berners-Lee wrote a proposal that referenced ENQUIRE, a database and software project he had built in 1980, and described a more elaborate information management system.
With help from Robert Cailliau, he published a more formal proposal (on November 12, 1990) to build a "Hypertext project" called "WorldWideWeb" (one word, also "W3") as a "web" of "hypertext documents" to be viewed by "browsers" using a client–server architecture.
This proposal estimated that a read-only web would be developed within three months and that it would take six months to achieve "the creation of new links and new material by readers, [so that] authorship becomes universal" as well as "the automatic notification of a reader when new material of interest to him/her has become available." While the read-only goal was met, accessible authorship of web content took longer to mature, with the wiki concept, blogs, Web 2.0 and RSS/Atom.
The proposal was modeled after the Dynatext SGML reader by Electronic Book Technology, a spin-off from the Institute for Research in Information and Scholarship at Brown University. The Dynatext system, licensed by CERN, was technically advanced and was a key player in the extension of SGML ISO 8879:1986 to Hypermedia within HyTime, but it was considered too expensive and had an inappropriate licensing policy for use in the general high energy physics community, namely a fee for each document and each document alteration.
A NeXT Computer was used by Berners-Lee as the world's first web server and also to write the first web browser, WorldWideWeb, in 1990. By Christmas 1990, Berners-Lee had built all the tools necessary for a working Web: the first web browser (which was a web editor as well); the first web server; and the first web pages, which described the project itself. On August 6, 1991, he posted a short summary of the World Wide Web project on the alt.hypertext newsgroup.
This date also marked the debut of the Web as a publicly available service on the Internet. The first photo on the web was uploaded by Berners-Lee in 1992, an image of the CERN house band Les Horribles Cernettes.
The first server outside Europe was set up at SLAC to host the SPIRES-HEP database. Accounts differ substantially as to the date of this event. The World Wide Web Consortium says December 1992, whereas SLAC itself claims 1991; the latter date is also supported by a W3C document entitled A Little History of the World Wide Web.
The crucial underlying concept of hypertext originated with older projects from the 1960s, such as the Hypertext Editing System (HES) at Brown University, Ted Nelson's Project Xanadu, and Douglas Engelbart's oN-Line System (NLS). Both Nelson and Engelbart were in turn inspired by Vannevar Bush's microfilm-based "memex", which was described in the 1945 essay "As We May Think".
Berners-Lee's breakthrough was to marry hypertext to the Internet. In his book Weaving The Web, he explains that he had repeatedly suggested that a marriage between the two technologies was possible to members of both technical communities, but when no one took up his invitation, he finally tackled the project himself. In the process, he developed three essential technologies:
- a system of globally unique identifiers for resources on the Web and elsewhere, the Universal Document Identifier (UDI), later known as Uniform Resource Locator (URL) and Uniform Resource Identifier (URI);
- the publishing language HyperText Markup Language (HTML);
- the Hypertext Transfer Protocol (HTTP).
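These three technologies still interlock the same way today: a URL names a resource, HTTP transfers it between client and server, and HTML describes the document itself. As an illustrative sketch (not part of the original history), the following self-contained Python example serves a single HTML page from a local test server and then retrieves it by URL over HTTP:

```python
# Sketch: the three Web technologies working together.
# A local server answers an HTTP GET for an HTML page named by a URL.
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import urlopen

# The document itself is HTML, containing a hyperlink.
HTML = b"<html><body><a href='http://example.com/'>a hyperlink</a></body></html>"

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):                       # answer the HTTP request
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(HTML)              # the payload is an HTML document
    def log_message(self, *args):           # silence per-request logging
        pass

# Port 0 asks the OS for any free port, so the example is self-contained.
server = HTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The URL names host, port, and path of the resource.
url = f"http://127.0.0.1:{server.server_port}/index.html"
with urlopen(url) as resp:                  # HTTP carries the transfer
    page = resp.read()
server.shutdown()
print(page.decode())
```

The hostname, port, and page content here are invented for the demonstration; the point is only the division of labor among identifier (URL), transfer protocol (HTTP), and document format (HTML).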
The World Wide Web had a number of differences from other hypertext systems that were then available. The Web required only unidirectional links rather than bidirectional ones. This made it possible for someone to link to another resource without action by the owner of that resource. It also significantly reduced the difficulty of implementing web servers and browsers (in comparison to earlier systems), but in turn presented the chronic problem of link rot.
Unlike predecessors such as HyperCard, the World Wide Web was non-proprietary, making it possible to develop servers and clients independently and to add extensions without licensing restrictions. On April 30, 1993, CERN announced that the World Wide Web would be free to anyone, with no fees due.
Coming two months after the announcement that the server implementation of the Gopher protocol was no longer free to use, this produced a rapid shift away from Gopher and towards the Web. An early popular web browser was ViolaWWW for Unix and the X Window System.
Scholars generally agree that a turning point for the World Wide Web began with the introduction of the Mosaic web browser in 1993, a graphical browser developed by a team at the National Center for Supercomputing Applications at the University of Illinois at Urbana-Champaign (NCSA-UIUC) led by Marc Andreessen. Funding for Mosaic came from the U.S. High-Performance Computing and Communications Initiative and the High Performance Computing and Communication Act of 1991, one of several computing developments initiated by U.S. Senator Al Gore.
Prior to the release of Mosaic, graphics were not commonly mixed with text in web pages, and the Web was less popular than older protocols in use over the Internet, such as Gopher and Wide Area Information Servers (WAIS). Mosaic's graphical user interface allowed the Web to become, by far, the most popular Internet protocol.
The World Wide Web Consortium (W3C) was founded by Tim Berners-Lee after he left the European Organization for Nuclear Research (CERN) in October 1994. It was founded at the Massachusetts Institute of Technology Laboratory for Computer Science (MIT/LCS) with support from the Defense Advanced Research Projects Agency (DARPA), which had pioneered the Internet; a year later, a second site was founded at INRIA (a French national computer research lab) with support from the European Commission DG InfSo; and in 1996, a third continental site was created in Japan at Keio University.
By the end of 1994, while the total number of websites was still tiny by present standards, quite a number of notable websites were already active, many of which are the precursors or inspiration for today's most popular services.
Connected by the existing Internet, other websites were created around the world, adding international standards for domain names and HTML. Since then, Berners-Lee has played an active role in guiding the development of web standards (such as the markup languages in which web pages are composed), and in recent years has advocated his vision of a Semantic Web.
The World Wide Web enabled the spread of information over the Internet through an easy-to-use and flexible format. It thus played an important role in popularizing use of the Internet. Although the two terms are sometimes conflated in popular use, World Wide Web is not synonymous with Internet. The Web is a collection of documents, together with the client and server software that exchanges them, built on Internet protocols such as TCP/IP and HTTP; the Internet is the underlying global network itself.