Do you ever ask for a Coke when you just want a soft drink? Do you call any tissue a Kleenex? Have you ever asked someone to Xerox something for you even though the copier wasn’t made by Xerox? If you answered yes, you’re not alone. Such synecdochic language is common when a brand name becomes so ubiquitous that it turns into a near-generic term; it’s a curse of success.
Today, we see the same phenomenon happening in the world of MP3 players and Apple’s hugely successful iPod. MP3 players of all makes and models are often referred to as iPods, even when they are not. Such generalizations not only blur the lines between manufacturers; they also make the different models of genuine Apple iPods confusing. With three models on the market, the iPod (sometimes called the iPod video), the iPod nano and the iPod shuffle, Apple’s current lineup gives consumers choices to fit their needs and budgets.
The smallest and cheapest of the iPods is the iPod shuffle. At a size that’s slightly larger than your thumb, iPod shuffle is the ultimate in truly portable music. With the included clip, it can be attached to your clothes, hat or just about anywhere. With one gigabyte (GB) of memory, it can hold up to 240 songs that shuffle to give you a new music experience whenever you want.
While the shuffle provides plenty for some people, others may want to store more music and have more control over the songs they’ve downloaded. The iPod nano can hold up to 8 GB of data, which means up to 2,000 songs can be stored and ready to play at any time. Using simple, innovative touch controls, the nano lets you select the songs you want to hear or shuffle them if you like. Games, podcasts, audiobooks, photos, contact lists and even your calendar can be at your fingertips with the nano, which sits in the middle of the current iPod lineup in both price and features.
The iPod is the largest and most powerful of all iPods. At four inches long and two and a half inches wide, it’s hardly large by most standards, but the extra size it has compared to other iPods gives it the ability to not only play music, but to play videos and movies as well. 80GB of memory lets you store up to 20,000 songs, so you have plenty of room to download movies and TV shows in addition to all the features you find on the nano.
The iPod line demonstrates this flexibility in size, price and features, all in the easy-to-use format that has made Apple products so popular. No wonder the name has become synonymous with all MP3 players. Still, an MP3 player by any other name is not an iPod, even if everyone calls it one.
Why is the price of Nano rising?
The ultimate Crypto-IoT collaboration
The Nano Center announced yesterday that the launch of the Nano IoT charger has been confirmed. The charger represents the project’s most promising entry into the IoT industry, and both it and the connected device are said to work with the native NANO coin.
The charger released so far is still a prototype; the features revealed to date are listed below, followed by a sketch of how the scan-to-pay flow might work:
- It only requires the user to scan a QR code for transactions
- The QR code initiates a managed transaction process at the micro level; hence no need to worry about change
- For now, it is only compatible with NANO wallets
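For the curious, here is a minimal sketch of what such a scan-to-pay flow could look like. It assumes the charger encodes a “nano:” payment URI in its QR code and uses the common Python qrcode library; the URI fields, the address and the amount below are illustrative assumptions, not details the Nano Center has published.

```python
# Illustrative sketch only: how a charger might present a NANO payment request
# as a QR code. The "nano:" URI format, address and amount are assumptions.
import qrcode  # pip install qrcode[pil]

def build_payment_uri(address: str, raw_amount: int) -> str:
    """Compose a nano-style payment URI for a fixed charging fee (amount in raw units)."""
    return f"nano:{address}?amount={raw_amount}"

def make_qr(address: str, raw_amount: int, path: str = "charge_request.png") -> None:
    """Render the payment URI as a QR image that a NANO wallet could scan."""
    uri = build_payment_uri(address, raw_amount)
    qrcode.make(uri).save(path)

# Hypothetical charger address and a micro-payment amount.
make_qr("nano_1examplechargeraddressxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx", 10**27)
```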
This announcement from the Nano Center is a huge milestone for both the company and the community: once the charger is successfully tested and brought to market, it could serve many real-world use cases. Of course, getting there will take multiple iterations of the product.
Nano branches out
Nano is not the first cryptocurrency project to venture into the IoT space; in fact it is the third or fourth, behind IOTA and several other coins already working in the area. The excitement and anticipation surrounding this coin, however, come from the unusual way the company did its market research before embarking on the journey.
Nano used its YouTube channel to start engaging with its community, and the feedback it received there proved invaluable: the reaction to the channel and its content made very clear what the community expected and what the company needed to do.
With this move to diversify its business, the company has the potential to become a global brand in the IoT industry. The NANO coin is also being listed on more and more exchanges, which is a major reason for the coin’s growing acceptance and its rising market price.
Price rise, more expected
In the last week, the price of Nano has risen more than 90%, and it is up roughly 250% over the last two weeks, from $1.52 USD. The increase has multiple causes, among them the fact that the NANO coin is now listed and accepted by many exchanges and platforms as a transactional cryptocurrency.
The other two reasons follow from this article: the company has diversified into the IoT space with its new product, and its initial network stress test passed with flying colors. It is very important for any crypto company to have the support of its community, and it is safe to say that NANO does. At the time of writing, the coin trades at $3.12 USD.
HIV-AIDS – Immunity, Eradication and Its Disappearing Victims
Human immunodeficiency virus (HIV), the retrovirus responsible for acquired immune deficiency syndrome (AIDS), has been around since sometime between 1884 and 1924 (while lentiviruses, the genus to which HIV belongs, have existed for over 14 million years), when it entered the human population from a chimpanzee in southeastern Cameroon during a period of rapid urbanization. At the time, no one noticed, nor knew that it would result in one of the deadliest pandemics. Nor was anyone aware that some would possess a natural immunity, that a cure would remain elusive a decade into the 21st century, or that a significant number of deceased victims would be purged from mortality statistics, distorting the pandemic’s severity.
As the number of cases spread from Cameroon to neighboring countries, namely the Democratic Republic of Congo (DRC), Gabon, Equatorial Guinea, and the Central African Republic, they drew little attention even as victims died in scattered numbers from a series of complications (e.g. Pneumocystis pneumonia (PCP), Kaposi’s sarcoma, etc.) later attributed to AIDS. This was likely because of Africa’s limited interaction with the developed world until the widespread use of air travel, the isolated, low incidence of cases, HIV’s long incubation period (up to 10 years) before the onset of AIDS, and the absence of technology, reliable testing methods and knowledge surrounding the virus. The earliest confirmed case, based on ZR59, a blood sample taken from a patient in Kinshasa, DRC, dates back to 1959.
The outbreak of AIDS finally gained attention on June 5, 1981 after the U.S. Centers for Disease Control (CDC) detected a cluster of deaths from PCP in Los Angeles and New York City. By August 1982, as the incidence of cases spread, the CDC referred to the outbreak as AIDS. The responsible retrovirus, HIV, was isolated nearly a year later (May 1983) by researchers from the Pasteur Institute in France and given its official name in May 1986 by the International Committee on Taxonomy of Viruses. During this period, HIV-related mortality rates rose steadily in the United States peaking in 1994-1995.
HIV:
HIV is spherical in shape and approximately 120 nanometers (nm) in diameter (or 60 times smaller than a red blood cell). It is composed of two copies of single-stranded convoluted RNA surrounded by a conical capsid and lipid membrane that prevents antibodies from binding to it. HIV also consists of glycoprotein (gp120 and gp41) spikes and is a highly mutating virus. Its genome changes by as much as 1% each year, significantly faster than “killer” cytotoxic T-Cells (CD8+) can adapt. It is transmitted through bodily fluids.
Per CD4 Cell Tests (Fact Sheet Number 124, AIDS InfoNet, 21 March 2009), when “HIV infects humans” it infects “helper” T-4 (CD4) cells that are critical in resisting infections. HIV does so by merging its genetic code with that of T-4 (CD4) cells. HIV’s spikes stick to the surface of T-4 (CD4) cells enabling its viral envelope to fuse with their membrane. Once fused, HIV pastes its contents into the DNA of T-4 (CD4) cells with the enzyme, integrase, so that each time T-4 (CD4) cells replicate, they produce additional “copies of HIV,” reducing the count of healthy T-4 (CD4) cells. Then as healthy T-4 (CD4) cells, which come in millions of families geared towards specific pathogens are eliminated, the body is rendered defenseless against the pathogens “they were designed” to fight until ultimately, the immune system is overwhelmed.
When the T-4 (CD4) cell count drops below 200 cells per cubic mm of blood (or below 14% of total lymphocytes; normal counts range from 500-1,600, or 30%-60% of lymphocytes), indicating serious immune system damage, the victim is deemed to have AIDS, “the end point of an infection that is continuous, progressive and pathogenic,” per Richard Hunt, MD (Human Immunodeficiency Virus And AIDS Statistics, Virology – Chapter 7, Microbiology and Immunology On-line, University of South Carolina School of Medicine, 23 February 2010), and is vulnerable to a multitude of opportunistic infections. Examples are PCP, a fungal infection that is a major killer of HIV-positive persons; Kaposi’s sarcoma, a rare form of cancer; toxoplasmosis, a parasitic infection that attacks the brain and other parts of the body, and cryptococcosis, a fungal infection that attacks the brain and spinal cord (both of which usually occur when the T-4 (CD4) cell count drops below 100); and mycobacterium avium complex (MAC), a bacterial infection that can be localized to a specific organ (usually the bone marrow, intestines, liver, or lungs) or widespread, in which case it is referred to as disseminated mycobacterium avium complex (DMAC) and often occurs when the T-4 (CD4) cell count drops below 50.
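Purely as an illustration of the thresholds just described (and not as clinical guidance), the cut-offs can be written out as a small helper; the function name and structure are mine, and the numbers come straight from the sources cited above.

```python
# Encodes the T-4 (CD4) thresholds cited above (AIDS InfoNet / Hunt). Illustrative only.
def cd4_stage(cd4_count: int, cd4_percent: float | None = None) -> str:
    """Classify immune status from a T-4 (CD4) count in cells per cubic mm of blood."""
    if cd4_count < 50:
        return "AIDS; high risk of disseminated MAC (DMAC)"
    if cd4_count < 100:
        return "AIDS; high risk of toxoplasmosis and cryptococcosis"
    if cd4_count < 200 or (cd4_percent is not None and cd4_percent < 14):
        return "AIDS (count below 200, or below 14% of total lymphocytes)"
    if 500 <= cd4_count <= 1600:
        return "within the normal range (500-1600, or 30%-60% of lymphocytes)"
    return "below normal but above the AIDS threshold"

print(cd4_stage(180))  # AIDS (count below 200, ...)
print(cd4_stage(650))  # within the normal range ...
```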
Natural Immunity:
Since the onset of the HIV/AIDS pandemic in 1981, cases of people with a natural immunity to HIV have been documented. Although these persons, called long-term non-progressors (LTNPs), are infected with HIV, they never develop AIDS. When LTNPs are infected, some suffer an initial drop in their T-4 (CD4) cell count; however, when the count reaches around 500 it stabilizes and never drops again, preventing the onset of AIDS. Furthermore, while CD8+ T-Cells (even in large numbers) are ineffective against HIV-infected T-4 (CD4) cells in progressors (persons without a natural immunity to HIV), the National Institutes of Health (NIH) reported in a December 4, 2008 press release that “CD8+ T-Cells taken from LTNPs [can efficiently] kill HIV-infected cells in less than [an] hour,” in which “a protein, perforin (produced only in negligible amounts in progressors), manufactured by their CD8+ T-Cells punches holes in the infected cells,” enabling a second protein, “granzyme B,” to penetrate and kill them.
Per Genetic HIV Resistance Deciphered (Med-Tech, 7 January 2005), the roots of this immunity date back a thousand years to “a pair of mutated genes – one in each chromosome – that prevent their immune cells from developing [Chemokine (C-C motif) receptor 5 (CCR5) receptors] that let [HIV penetrate].” This mutation likely evolved to provide added protection against smallpox, according to Alison Galvani, professor of epidemiology at Yale University. Based on the latest scientific evidence, the mutated CCR5 gene (also called delta 32 because of the absence, or deletion, of 32 amino acids from its cytokine receptor), located in Th2 cells, developed in Scandinavia and progressed southward to central Asia as the Vikings expanded their influence. Consequently, up to 1% of Northern Europeans (Swedes being the majority), followed by a similar percentage of Central Asians, have this mutation; if inherited from both parents it provides total immunity, while another 10-15% of Northern Europeans and Central Asians, having inherited the mutation from one parent, exhibit greater resistance rather than complete immunity to HIV.
At the same time, even though the CCR5 mutation is absent in Africans, a small percentage also exhibit natural immunity (possibly developed through exposure) to HIV/AIDS: CD8+ T-Cell generation that effectively kills HIV-infected cells, and mutated human leukocyte group A (HLA) antigens that coat the surface of their T-4 (CD4) cells to prevent HIV from penetrating. This is based on an intensive study of 25 Nairobi prostitutes who, per The Amazing Cases of People with Natural Immunity against HIV (Softpedia, 27 June 2007), have “had sex with hundreds, perhaps thousands of HIV-positive clients” and shown no sign of contracting HIV.
In addition, per Genetic HIV Resistance Deciphered, people with larger numbers of the CCL3L1 gene, which produces cytokines (proteins that “gum up” CCR5 receptors) to prevent HIV from entering their T-4 (CD4) cells, have greater resistance to HIV than others within their ethnic group who possess fewer copies of the gene and get “sick as much as 2.6 times faster.”
At the same time, up to 75% of newborn babies also possess natural immunity (for reasons still not known) when exposed to HIV-positive blood. Although born with HIV antibodies, and thus HIV-positive, newborns “usually lose HIV antibodies acquired from their HIV-positive mothers within 12-16 – maximum 18 months,” a “spontaneous loss of [HIV] antibodies” without medical intervention that is called seroreversion. “However, with the exception of very few instances, these infants are not HIV-infected” – conclusive proof of a natural immunity to HIV.[1] Furthermore, when pregnant HIV-positive women are administered highly active antiretroviral therapy (HAART), which lowers the viral concentration of HIV in their blood, an astonishing 97% of their newborns lose their HIV antibodies through seroreversion and become HIV-free, per the Eunice Kennedy Shriver National Institute of Child Health and Human Development (NICHD), as posted under Surveillance Monitoring for ART Toxicities Study in HIV-Uninfected Children Born to HIV-Infected Mothers (SMARTT) (ClinicalTrials.gov, 29 March 2008). However, at this time it is not known whether these newborns retain their natural immunity throughout their lives.
Eradication:
With a cure perhaps unattainable, eradication of HIV/AIDS, in the same way smallpox (which also has no cure) was eliminated, may be the most feasible option. According to Dr. Brian Williams of the South African Centre for Epidemiological Modelling and Analysis, eradication of HIV/AIDS is an achievable goal that could be attained by 2050 if the current research paradigm shifts its focus from finding a cure to stopping transmission.
Per Dr. Williams such an effort would require testing billions of people annually. Though costly, the benefits would exceed the costs “from day one” according to the South African epidemiologist. Anyone found with HIV antibodies would immediately be administered antiretroviral therapy (which reduces HIV concentration 10,000-fold and infectiousness 25-fold) to halt transmission, effectively ending such transmission by 2015 and eliminating the disease by 2050 as most carriers die out, according to his estimate. The reason for this optimism, per Steve Connor, Aids: is the end in sight? (The Independent, 22 February 2010), is a “study published in 2008 [that] showed it is theoretically possible to cut new HIV cases by 95%, from a prevalence of 20 per 1,000 to 1 per 1,000, within 10 years of implementing a programme [sic] of universal testing and prescription of [HA]ART drugs.”
Even though clinical trials to test Dr. Williams’ vision will start in 2010 in Somkhele, South Africa, access to HAART still needs to be improved greatly to purge the disease. Presently only about 42% of HIV-positive people have access to HAART.
Furthermore, for eradication efforts to succeed, prevention programs (which currently reach fewer than 1 in 5 people in sub-Saharan Africa, the epicenter of the pandemic, where average life expectancy has fallen below 40 and about 15 million children have been orphaned) will have to continue to play an essential role in stopping transmission. Such programs must include, but are not limited to, abstinence, condom distribution, education on transmission and safe sex, and needle distribution to drug users. The latter is badly lacking: according to Kate Kelland, Failure to aid drug users drives HIV spread: study (Reuters, 1 March 2010), with “more than 90% of the world’s 16 million injecting drug users offered no help to avoid contracting AIDS,” despite the fact that such users often share needles and approximately 18.75% are believed to be HIV-positive.
Proof that such efforts can work is evident in the President’s Emergency Plan for AIDS Relief (PEPFAR), created in 2003 for Africa, which provides funding focused on HAART and palliative care for HIV/AIDS patients, HIV/AIDS awareness education and prevention programs (condoms, needle exchanges, and abstinence), and financial assistance to care for the pandemic’s orphans and other vulnerable children. Per Michael Smith, PEPFAR Cut AIDS Death Rate in African Nations (Med Page Today, 6 April 2009), the program “averted about 1.1 million deaths [from 2004-2007]… a 10% reduction compared to neighboring African countries.”
The “Disappearing” Victims:
Despite reason for optimism based on Dr. Williams’ vision of eradication, the “disappearance” of HIV/AIDS victims is highly disturbing. In fact, when current statistics are compared to past statistics, more than 19 million victims or triple the number of murdered Holocaust victims (1933-1945) have been purged from the official record (effectively minimizing the severity of the pandemic) without as much as a whimper of protest, possibly because demographically speaking, a statistically-significant number of the deceased fall into groups that have been and continue to be the subjects of racial, gender, cultural, and even religious discrimination. In the words of Charles King, an activist who spoke in San Francisco on World AIDS Day in 2007, it is likely because HIV/AIDS has mainly “taken the lives of people deemed expendable”[2] the same mentality used to justify Hitler’s “Final Solution” and other pogroms.
Back on January 25, 2002 in AIDS Death Toll ‘Likely’ to Surpass That of Bubonic Plague, Expert Says in British Medical Journal Special Issue on HIV/AIDS (Kaiser Network), it was written, “AIDS – which has already killed 25 million people worldwide – will overtake the bubonic plague as the ‘world’s worst pandemic’ if the 40 million people currently infected with HIV do not get access to life-prolonging drugs…”
A year earlier, UNAIDS had listed the global death toll as 21.8 million, with an increase of 3.2 million in 2002. Based on statistics reported by the World Health Organization (WHO), UNAIDS, and the U.S. Census Bureau, as tabulated in The Global HIV/AIDS Epidemic: Current & Future Challenges by Jennifer Kates, M.A., M.P.A., Director of HIV Policy, Kaiser Family Foundation, the global death toll had risen to 28 million by February 2003. Add the annual mortality statistics of 3 million (2003), 3.1 million (2004 and 2005), 2.9 million (2006), 2.1 million (2007), and 2 million (2008, the most recent complete year of reporting) per UNAIDS, plus a conservative estimate of 1.4 million for 2009 (if another 28% decline, as occurred between 2006 and 2007, took place between 2008 and 2009), and the global death toll for year-end 2009 would be roughly 45.6 million. Yet when UNAIDS released its latest report in November 2009, as reported in the Mail & Guardian (South Africa, 24 November 2009), the worldwide death toll through 2008 was listed as “passing 25 million,” approximately 19.2 million below the actual mark.
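The roughly 45.6 million figure follows from simple addition of the numbers quoted above:

```python
# Quick check of the ~45.6 million cumulative total, using only the figures
# quoted in the text (all values in millions of deaths).
base_through_feb_2003 = 28.0
annual = {2003: 3.0, 2004: 3.1, 2005: 3.1, 2006: 2.9,
          2007: 2.1, 2008: 2.0, 2009: 1.4}  # 2009 is the article's conservative estimate
total = base_through_feb_2003 + sum(annual.values())
print(total)  # 45.6
```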
Per AIDS cases drop due to revised data (MSNBC, 19 November 2007), the “disappearing” victims can be attributed to “a new methodology.” While this may make sense with regard to prevalence since “[p]revious AIDS numbers were largely based on the numbers of infected pregnant women at clinics, as well as projecting the AIDS rates of certain high-risk groups like drug users to the entire population at risk” versus the new methodology that incorporates data from “national household surveys,” it does not with regard to mortality figures which are calculated primarily from national AIDS registries and/or death certificates based on the presence of HIV, T-4 (CD4) cell counts below 200, and death caused by opportunistic AIDS-related infections resulting from such low T-4 (CD4) cell counts.
In retrospect, when viewing the approximate 45.6 million figure, few pandemics have killed more than HIV/AIDS – Smallpox (which had come in waves since 430 BC until the World Health Organization (WHO) certified its eradication in 1979), killed 300-500 million, Black Death/Bubonic Plague killed approximately 75 million from 1340-1771, and Spanish Influenza killed between 40-50 million from 1918-1919.
Optimism for the Future:
Until HIV/AIDS can be certified as eradicated by the WHO, there is still reason for optimism, despite the terrible economic toll the pandemic has taken, especially on sub-Saharan Africa (through lost skills, shrinking workforces and rising medical costs) and other developing regions, and despite its devastating toll in human lives and on families.
As of December 2008, per UNAIDS, 33.4 million people were infected with HIV, a 1.2% increase from a year earlier, with much of the rise attributed to a declining mortality rate due to a 10-fold increase in the availability of HAART since 2004. About 2.7 million persons were newly infected in 2008, representing 18% and 30% decreases in new HIV infections globally since 2001 and 1996, respectively. In another promising sign, new HIV infections in sub-Saharan Africa, responsible for about 70% of all HIV/AIDS-related deaths in 2008, have fallen by 15% since 2001. At the same time, there were approximately 2 million HIV/AIDS-related deaths in 2008, a 35% reduction from 2004 levels, when the global mortality rate peaked.
Presently, the HIV/AIDS pandemic has begun to decline or stabilize in most parts of the world. Declines have been recorded in sub-Saharan Africa and Asia (although the mortality rate is increasing in East Asia) while the pandemic has stabilized in the Caribbean, Latin America, North America and Western and Central Europe. The only part of the world where the HIV/AIDS pandemic is worsening is the Eastern European (especially in Ukraine and Russia) and Central Asian region.
The declines should continue as new methods of prevention and treatment are developed. Based on studies of LTNPs, a new class of treatments focused on gene therapy to delete the necessary 32 amino acids from CCR5 receptors, elicit perforin and granzyme B production, and develop protease inhibitors may in the future provide immunity to HIV and halt its spread.
Though still a long way off and potentially very expensive (up to $20,000 per treatment), Drugs.com Med News reported in Gene Therapy Shows Promise Against HIV (19 February 2010) that when researchers removed immune cells from eight HIV-infected persons, modified their genetic code and reinserted them, the “levels of HIV fell below the expected levels in seven of the eight patients [with] signs of the virus disappear[ing] altogether in one” even though HAART treatment was halted. A study by UCLA AIDS Institute researchers, which removed CCR5 receptors by “transplanting a small RNA molecule known as short hairpin RNA (shRNA), which induced RNA interference into human stem cells to inhibit the expression of CCR5 in human immune cells” mimicking those of LTNPs through the use of “a humanized mouse model,” as reported on February 26, 2010 in Medical News Today in Gene-Based Stem Cell Therapy Specifically Removes Cell Receptor That Attracts HIV, showed similar success in that it resulted in a “stable, long-term reduction of CCR5.”
At the same time, as announced in HIV/AIDS drug puzzle cracked (Kate Kelland, Reuters, 1 February 2010), British and U.S. scientists succeeded (after 40,000 unsuccessful attempts) in growing a crystal that let them decipher the structure of integrase, an enzyme found in HIV and other retroviruses. This will lead to a better understanding of how integrase-inhibitor drugs work, and perhaps to a more effective generation of treatments that could prevent HIV from pasting a copy of its genetic code into the DNA of victims’ T-4 (CD4) cells.
Likewise, per Structure of HIV coat may help develop new drugs (Health News, 13 November 2009) scientists from the University of Pittsburgh School of Medicine “unraveled the complex structure” of the capsid coat (viewing its “overall shape and atomic details”) “surrounding HIV” that could enable “scientists to design therapeutic compounds” to block infection.
At the same time, researchers at the University of Texas Medical School may have finally discovered HIV’s vulnerability, per Achilles Heel of HIV Uncovered (Ani, July 2008) – “a tiny stretch of amino acids numbered 421-433 on gp120” that must remain constant to attach to T-4 (CD4) cells. To conceal its weakness and evade an effective immune response, HIV tricks the body into attacking its mutating regions, which change so rapidly, ineffective antibodies are produced until the immune system is overwhelmed. Based on this finding, the researchers have created an abzyme (an antibody with catalytic or helpful enzymatic activity) derived from blood samples taken from HIV-negative people with lupus (a chronic autoimmune disease that can attack any part of the body – skin, joints, and/or organs) and HIV-positive LTNPs, which has proven potent in neutralizing HIV in lab tests, thus offering promise of developing an effective vaccine or microbicide (gel to protect against sexual transmission). Although human clinical trials are to follow, it might not be until 2015 or 2020 before abzymatic treatments are available.
Elsewhere, International AIDS Vaccine Initiative (IAVI) scientists recently isolated two antibodies, PG9 and PG16, from an HIV-positive African LTNP. These broadly neutralizing antibodies (BNAbs) bind to HIV’s viral spike, composed of gp120 and gp41, to block the virus from infecting T-4 (CD4) cells. Per Monica Hoyos Flight, A new starting point for HIV vaccine design (Nature Reviews, MacMillan Publishers Limited, November 2009), “PG9 and PG16, when tested against a larger panel of viruses [HIV] neutralized 127 and 116 viruses, respectively,” providing additional hope for an effective vaccine and for novel treatment regimens that induce the body to produce BNAbs, which currently only the immune system of LTNPs can create.
At the same time, studies of newborn seroreversion and of medically induced production of human leukocyte group A (HLA) antigens that coat the surface of T-4 (CD4) cells could also eventually lead to an anti-HIV vaccine that could protect billions of people.
In the meantime, until such developments bear fruit, HAART (despite mild side effects such as nausea and headaches in some patients and serious to life-threatening side effects in others) has proven highly effective in containing HIV, with Gerald Pierone Jr., MD, in The End of HIV Drug Development as We Know It? (The Body Pro: The HIV Resource for Health Professionals, 18 February 2010) reporting that “about 80% of patients [receiving HAART] reach an undetectable viral load.” Furthermore, greater access to antiretrovirals, per Drop in HIV infections and deaths (BBC News, 24 November 2009), “has helped cut the death toll from HIV by more than 10%” from 2004-2008 and saved more than 3 million lives, based on UNAIDS and WHO statistics. HAART has also cut the age-adjusted mortality rate by more than 70%, according to the Kaiser Family Foundation’s July 2007 HIV/AIDS Policy Fact Sheet, because of its effectiveness in delaying and even preventing the onset of AIDS.
Despite HAART’s cost ($10,000-$15,000 per patient per year), the State of California in a report titled, HIV/AIDS in California, 1981-2008 called it “dramatic and life-saving” especially since early intervention results in greater mean T-4 (CD4) cell counts translating into fewer opportunistic infections and deaths. It also results in real cost savings because of the strong inverse relationship between T-4 (CD4) cell counts and associated medical expenses.
In conclusion, despite HIV/AIDS’ “disappearing” victims, there is reason for optimism. Research over the last year has offered several promising leads – the underlying cause of LTNPs’ immunity has been discovered, the structure of the HIV virus solved, and its weak point found – while improved access to HAART and to HIV/AIDS education and prevention measures (with the exception of programs for intravenous drug users) has made significant inroads in reducing infection and mortality rates, buying victims additional years and an enhanced quality of life.
______
[1] Orapun Metadilogkul, Vichai Jirathitikal, and Aldar S. Bourinbalar. Serodeconversion of HIV Antibody-Positive AIDS Patients Following Treatment with V-1 Immunitor. Journal of Biomedicine and Biotechnology. 7 September 2008.
[2] Michael Crawford. AIDS: Where is Our Rage? The Bilerico Project. 2 December 2007. 28 February 2010. http://www.bilerico.com/2007/12/aids_where_is_our_rage.php
Additional Source:
Wikipedia. 24-28 February 2010. http://en.wikipedia.org/
High Frequency Trading: Sneak Peek and Cut the Line
Latency arbitrage, electronic stock leverage and high-frequency trading are the pieces of financial jargon that have been discussed regularly over the past month. People argue that the US stock market is rigged by high-frequency traders, investment banks and private stock exchanges. But what does it all mean?
Public and private exchanges contain high-performance computers that are programmed to trade financial instruments at the speed of light. Each computer trades large swaths of stock in fractions of a second, while simultaneously receiving information about the same stock milliseconds before ordinary investors receive the data. High-frequency trading firms only collect data milliseconds in advance, so what’s the problem?
Latency arbitrage rests on the fact that different participants receive market data at slightly different times; the differences are tiny. It occurs when high-frequency trading algorithms enter trades fractions of a second before a competing trader and then pass the stock on moments later for a small profit. Although the profit per trade is small, the aggregate revenue from HFT is a significant portion of the wealth traded in the United States stock market. In essence, latency arbitrage is the main problem with HFT, i.e. algorithmic trading that uses sophisticated technological tools and computer algorithms to trade securities at high speed.
Today, we find private exchanges paying large sums of money to lay high-speed fiber optic cables from trading venues directly to their servers, taking milliseconds off the time they receive market data.
Here’s an illustration of how high-frequency trading firms exploit the timing of a single order split across multiple venues: You buy 20 shares of Bank of America at $17.80147, placing the order through your online broker. The brokerage buys 5 shares from an investor in Chicago, 5 from a firm in Los Angeles, and 10 from one in Denver, sending your order via high-speed fiber optic cables to the exchanges in Denver, Chicago, and Los Angeles. As soon as your order reaches Denver, firms with wires running directly to that exchange see the rest of your order coming; within about 4 milliseconds, while you are buying the 10 shares from Denver and 5 from Chicago, the fast trading firms resell Bank of America shares to you at $17.80689, and even higher by the time your order reaches Los Angeles. Firms run variations of this maneuver on a massive scale against investors and businesses across the country.
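Using the hypothetical prices from that illustration, the skim on a single small order is tiny, but the same fraction of a cent compounds across millions of shares a day (the share counts and volume below are illustrative):

```python
# Back-of-the-envelope check using the hypothetical prices from the illustration above.
your_limit       = 17.80147   # price you expected per share
resold_to_you    = 17.80689   # price the fast firm resells to you after front-running
front_run_shares = 15         # the later legs of the order in this illustration

skim_per_share = resold_to_you - your_limit
print(round(skim_per_share, 5))                     # ~0.00542 dollars per share
print(round(skim_per_share * front_run_shares, 4))  # ~0.08 dollars on this one small order
print(round(skim_per_share * 10_000_000, 2))        # ~54,200 dollars across 10 million shares
```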
Companies such as the Royal Bank of Canada have developed software that staggers the release of trades so that every venue involved receives the order at the same instant. In the example above, this means your order to buy Bank of America would reach Chicago, Denver, and Los Angeles simultaneously, leaving not a nanosecond for high-frequency traders to front-run it. Other trading firms, such as Fidelity, have installed 80 km coils of fiber optic cable between themselves and other traders. The coil slows transactions entering and leaving the firm: when high-frequency traders submit their trades, the data travels over roughly 50 miles of fiber and reaches the trader at the same time as all other trades.
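The idea behind that kind of routing software can be sketched very simply: measure the latency to each venue, then hold each child order just long enough that they all arrive at the same instant. The venue names and latencies below are illustrative assumptions, not measurements of any real system.

```python
# Delay-equalization sketch: release each child order late enough that every
# venue receives it at the same moment. Latencies (milliseconds) are made up.
venue_latency_ms = {"Chicago": 4.0, "Denver": 1.0, "Los Angeles": 7.0}

def release_schedule(latencies: dict[str, float]) -> dict[str, float]:
    """Delay (ms) to wait before sending to each venue so arrivals coincide."""
    slowest = max(latencies.values())
    return {venue: slowest - lat for venue, lat in latencies.items()}

slowest = max(venue_latency_ms.values())
for venue, delay in release_schedule(venue_latency_ms).items():
    print(f"{venue}: hold for {delay:.1f} ms; arrives at t = {slowest:.1f} ms")
```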
Essentially, companies that have the financial means to jump to the front of the trading queue do so. These firms are indifferent to what they trade; they trade because they know they have a guaranteed profit. High-frequency traders don’t play the market, they play the players. Since its inception, HFT has been the domain of mathematicians and physicists; the simple fact that physicists have their own niche in stock market trading should raise eyebrows. These traders do not actually invest capital; they collect what is essentially a tax on each share of equity that is traded. Unfortunately, it’s legal… and the interesting thing is that the big banks have not pushed back. Simply put, all they would have to do is put themselves on the same level as high-frequency traders, using either routing algorithms that stagger every trade or coils of fiber optic cable that equalize the speed at which all parties receive data.
After all, the latency arbitrage form of high-frequency trading is legal, but it is certainly not victimless: every investor who does not have the same trading facilities as high-frequency traders is forced to pay a marginally higher price. On one hand, HFT firms do pay large sums for the privilege, lending credence to the notion that it is each firm’s prerogative, and arbitrage has been used by traders since the inception of the New York Stock Exchange. On the other hand, investing in the market is a major part of our economy, and the stock market plays a major role in the growth of industries. Investing in the stock market is one of the few truly profitable financial activities available to the individual (leaving aside the inevitable capital gains tax). Complexities like HFT hold back the free exchange, governed by the invisible hand, on which our economy is built. I believe that when deterrents beyond taxation are allowed to stand, total participation [in the market] decreases. All investors must trade on the same level: investment evaluation should come down to security analysis and quantitative and qualitative analysis, not high-speed fiber optic location. Where algorithmic trading is one-sided (unlike, say, merger arbitrage), it should be regulated by an appropriate government agency. Ironically, the way to preserve the vestiges of laissez-faire economics is to use the considerable powers of legal action by promoting regulation.
As of April 13, the Securities and Exchange Commission is preparing to take action against a number of high-frequency trading firms. In addition, the SEC is pursuing a campaign of new rules and trading practices that would limit latency arbitrage.
Finally, some food for thought – the practice of high frequency trading [at its current level] was created by Bernie Madoff.
Treatment of tonsil stones with colloidal silver
Colloidal silver is now advertised as an alternative health product or preventative against various diseases. It is believed to support the body’s immune system by offering various antibacterial benefits. Colloidal silver is nothing but tiny, nano-sized silver particles suspended in distilled water. It is a type of bioactive product that, in sufficient concentration, is able to control and destroy harmful viruses and bacteria that cause disease, including conditions that can become serious at a later stage, such as leukemia, AIDS, tonsil stones and cancer.
Colloidal silver can be used for more than preventing disease: it can protect the skin from infection due to scratches, burns and wounds. Because it has good antibacterial properties, it can also be used to help prevent various conditions such as VRSA, ear infections, viral infections, stomach infections and food poisoning. Products containing colloidal silver are also used to purify water, treat infections, preserve certain drinks or beverages, and limit serious types of infections in the body.
But before using colloidal silver for any of these problems, several factors should be considered. It is vital to look for good-quality colloidal silver products, preferably obtained from professional health product stores. They should consist of pure silver and distilled water only. The silver particles must be very small and well dispersed in the water to ensure good results. The particles must also be very pure, free of additional components or protein supplements that the body does not need. In other words, colloidal silver used to prevent or treat diseases and other ailments must not contain stabilizers, chemicals or fragrances.
The taste of a colloidal silver product should be very similar to plain water, though it may have a slight metallic note because of the tiny nano-particles it contains in small proportions. The product should remain stable without refrigeration, even over long periods. If the label on colloidal silver used for the treatment or prevention of tonsil stones says “Store in the refrigerator,” it is a low-quality product. Since it consists only of silver in distilled or deionized water, it acts as an antibacterial product and will not deteriorate even with long storage.
Home remedies for tonsil stones
Not only can a tonsillectomy lead to various health problems later on, it is not cheap. The surgery may also interfere with daily activities for a while. Therefore, it is mostly avoided. In fact, there are natural and scientifically proven ways to get rid of tonsil stones so they never come back. There is absolutely no need to undergo long, drawn-out surgery or waste your money on expensive nasal sprays and tablets. Follow a step-by-step program that will show you exactly how to get rid of tonsil stones naturally and make sure they never come back! You can learn more about the program that promises natural treatment of tonsil stones from [http://tonsilstones1.com]
The seven best beers of 2011
The Rate Beer Best Awards is one of the biggest beer competitions in the world. Breweries from around the world compete for a spot in one of the many categories set for the various brews. From pale lagers to English-style bitters, from stouts to sour ales, the awards have a category for just about everything! At the end of the past year, the awards were distributed and the best beers for 2011 were announced!
As the prizes are very extensive, not all drinks can be listed. Instead, these are the #1 beers in some of the most popular categories:
1) Ayinger Celebrator Doppelbock – It’s no surprise that one of the number one breweries in the world comes from Germany. This creamy, rich and slightly smoky brew is a strong lager that everyone is sure to love!
2) Cigar City Cubano Style Espresso Brown Ale – Only the US could come up with a “new fashion” beer like this and make sure it comes out on top. The ale is aged on whole beans from coffee roasters, notably Naviera, which gives it a very distinct taste.
3) Fantome Saison – Fruity beers have been quite popular this past year and this Belgian brew will probably be the best of them all. Fizzy, citrusy and tart, yet with a freshness that only fruit can bring, this was definitely a favorite!
4) Narke Kaggen Stormaktsporter – This stout is blended with heather honey to add a unique and delicious twist to the Swedish blend. Number one in its category, reminiscent of mead, but not quite.
5) New Glarus Two Women Lager – Brewed in a Bavarian style with Bohemian malts and German hops, but still distinctly American, this pale lager topped all the lists!
6) New Glarus Wisconsin Belgian Red – Another fruit beer, but this time from the US, this special beer is brewed with roasted barley, cherries and wheat. A thick drink, but relatively mild, is great for those who appreciate fruity nuances.
7) Russian River Pliny the Younger – Proving that the US can easily take center stage at these awards, Pliny the Younger is hopped four separate times. Presented in a unique bottle, this wonderful brew gives you a fantastic drinking experience!
The past year has certainly been a great one for the beer-drinking world, and it reminds us that there is much to look forward to in 2012. Brewers are predicting that prices will rise due to the rising cost of barley. We’ll also see more canned drinks as bottles take a back seat for a while. On the non-production side of things, experts predict that 2012 beers will be less sour and have fewer wood and/or fruit tones. Craft beers will grow to the point of becoming one of the main drinks of the year, and nano-breweries, which focus on small production but high quality, will pop up everywhere. The way things are going, it’s very likely that the best beers of 2012 will include a few newcomers to the scene.
Increase testosterone levels with elliptical training
Testosterone is largely considered the hormone of youth, keeping your muscles and bones healthy, maintaining healthy fertility and sex drive, and maintaining overall energy. Healthy testosterone levels also help the human body resist the accumulation of excess fat. Exercise is one way to increase your testosterone naturally and safely. However, not all exercise routines are as effective as others. So the question is, can working out on an elliptical trainer effectively increase your body’s testosterone production?
Testosterone is an androgen secreted primarily by the testicles of men and the ovaries of women. It is best known for its effects on increasing lean muscle mass, reducing body fat and slowing the aging process. It is also important for sexual desire.
A normal level of testosterone in the bloodstream is between 350 and 1,000 nanograms per deciliter (ng/dl). After age 40, you start losing this hormone at an average rate of about 1% per year. Also, when your body weight approaches 30% above a normal healthy level, your estrogen levels will increase, which can lower your testosterone. So weight control is another important factor when trying to maintain healthy levels of this hormone.
Exercise is one way to increase testosterone as well as decrease body fat, making it less likely that excess weight will cause testosterone to decrease. Exercise stimulates the pituitary gland and testicles, which directly affects testosterone production. However, it is important to choose the right type of exercise if you want to increase your testosterone levels. It may surprise you that excessive training can actually decrease this hormone because it doesn’t allow enough time for recovery and repair and tissue damage occurs. Studies show that testosterone will increase with exercise for the first 45-60 minutes, after which cortisol levels rise, causing this important hormone to decrease.
Experts say the best exercise to boost testosterone should involve large muscle groups at the same time. The largest muscles of the body are in the legs, buttocks and back. This is why training on an elliptical trainer is such an effective way to increase your testosterone levels. Elliptical trainers are an excellent fitness machine for targeting the leg and butt muscle groups. If you have an elliptical trainer with moving handlebars, you will also work your back muscles. Remember that your workout should last 45-60 minutes, but no more to avoid depleting your testosterone from excessive exercise.
So get on your new or refurbished elliptical and help yourself age healthily by maintaining healthy testosterone levels.
Renewable Energy Potential and Disinformation
Are you confused about our energy crisis? It’s no wonder, given the amount of disinformation being peddled by Republicans and those with a vested interest in oil, coal and nuclear energy. What they want you to believe is that solar and wind cannot replace our current energy sources. John McCain repeated these lies in his recent debate with Barack Obama. Their calls of “drill, baby, drill” are absurd and misleading. For example, the oil reserves estimated to exist off California’s coast amount to 10 billion barrels. The U.S. consumes about 7.5 billion barrels per year. So what they are advocating is risking the long-term health of the coastal ecosystem in exchange for about 16 months’ worth of oil.
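The arithmetic behind that “about 16 months” claim is straightforward:

```python
# Months of U.S. supply represented by the estimated offshore California reserves.
offshore_reserves_bbl = 10.0   # billion barrels (estimate cited above)
us_consumption_bbl_yr = 7.5    # billion barrels consumed per year

months_of_supply = offshore_reserves_bbl / us_consumption_bbl_yr * 12
print(round(months_of_supply))  # 16
```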
Republicans have been taking Speaker Pelosi to task for not bringing up a vote on offshore drilling. Meanwhile, Republicans have voted against renewing the tax credits for solar and wind eight times this year. Talk about shortsightedness! As T. Boone Pickens says, whether we drill or not, “this argument misses the point.” It’s a band-aid at best. The U.S. has only 3% of the world’s oil supply. We consume 25% of the supply.
What is needed are long-term energy solutions. Here is what they don’t want you to know. Using less than 1% of our southwest desert lands, solar power plants could power the whole country. This is an area 92 miles by 92 miles, less than the land now used for coal mining. The January 2008 issue of Scientific American featured an article called “A Solar Grand Plan”, a proposal (which you can read online) to do just that. Their proposal would create a 69% solar-powered grid by 2050.
You can read it online at Scientific American website
It proposes building solar thermal and concentrating photovoltaic power plants, in our southwestern deserts, and a network of high voltage DC transmission lines to distribute the power to other parts of the country. This HVDC distribution system is the same thing that T Boone Pickens is recommending to move wind generated power from Texas, and from windfarms in the midwest, to the rest of the country. This will have the added benefit of beefing up the grid, something that is needed anyway.
Current thinking is that solar thermal should be emphasized more than the concentrating photovoltaic plants that the SciAm article emphasizes.
There is no shortage of good ideas out there. At setamericafree.org, you will find another plan called “A Blueprint for U.S. Energy Security”.
This plan shows how we can achieve energy security and meet the goals of reducing the threat of global warming, using current technology to get started. As we build, the technology will improve and the costs will fall.
One thing this plan calls for is plug-in hybrid cars (PHEVs), which would achieve an overall 100 mpg for the average driver. Most people drive less than 40 miles a day, commuting and so on. With current battery technology you would use no gasoline for the first 40 miles in a PHEV. Most people would recharge at night, when demand is low, by plugging into a 120-volt outlet, using about $1 worth of electricity to recharge. As the grid gets cleaner, the environmental benefits will improve. Plug In Partners has good information on PHEVs, including cost benefits.
from their site:
“A motorist driving 9,000 annual gasoline-free miles and 3,000 using gasoline would get 100 mpg (based on vehicles that get 25 mpg).
PHEVs outfitted with a battery pack providing a 40-mile electric range could power, using the all-electric mode, more than 60% of the total annual miles traveled by the average American driver.
A 2004 study by the Electric Power Research Institute (EPRI) found that plug-in hybrids can achieve life cycle costs parity with conventional gasoline vehicles – meaning that over the life of the car the cost will be equal or less despite the initial higher cost. The study calculated gasoline price as $1.75/gallon.”
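The quoted 100 mpg figure works out directly from those numbers:

```python
# Effective fuel economy of a PHEV, using the figures from the quote above.
electric_miles = 9_000    # annual gasoline-free miles
gasoline_miles = 3_000    # annual miles driven on gasoline
base_mpg       = 25       # mpg of the underlying vehicle when burning gasoline

gallons_used  = gasoline_miles / base_mpg                     # 120 gallons per year
effective_mpg = (electric_miles + gasoline_miles) / gallons_used
print(effective_mpg)  # 100.0
```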
Once the grid is clean energy, it can power much of our transportation as well. At that point, electric cars will make perfect sense and we will have had more time, to perfect the technology. If you study these two plans, you will see that they have much in common. By combining the best ideas of these and other similar plans, we can get the job done.
Another energy plan that has much in common with these is at:
repoweramerica.org/
Those in power want you to believe that these solutions will be too expensive. Nothing could be further from the truth. For example, the solar proposal published by SciAm calls for spending about $400 billion in public money over a period of about 40 years. That is less public money than we spent to build the high-speed information highway over the last 35 years. And it is about how much we give to oil companies, in the form of tax credits and subsidies, every five years. So by spending about 1/8 of what we now give away to oil companies, we could power the entire nation with solar energy from the southwest.
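That “about 1/8” comparison follows from the figures in this paragraph (roughly $80 billion per year in oil tax credits and subsidies, a figure also cited near the end of this piece):

```python
# Annual cost of the SciAm solar plan versus annual oil subsidies, per the text.
solar_plan_total_bn = 400   # billions of dollars of public money
plan_years          = 40
oil_subsidies_bn_yr = 80    # billions per year (about $400 billion every five years)

solar_per_year = solar_plan_total_bn / plan_years   # $10 billion per year
print(solar_per_year)                        # 10.0
print(solar_per_year / oil_subsidies_bn_yr)  # 0.125 -> about 1/8
```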
As further proof that we are misinformed, most Americans probably haven’t even heard of solar thermal energy. Solar thermal power plants use the heat from the sun to generate electricity, usually by boiling water to drive a steam turbine generator. This is so low tech that we could have done it 100 years ago. If you can build parabolic mirrors or Fresnel lenses to concentrate sunlight, and if you can build a steam driven electric generator, you can build a solar thermal power plant. In fact some designs use flat mirrors. Solar thermal plants can generate electricity at night or during cloudy periods by storing heat. One method uses molten salts, which are excellent at retaining heat. Their power output can remain steady when clouds pass by. The scale of these plants is in the hundreds of megawatts. Two plants proposed for the Mojave Desert are for up to 800 and 900 megawatts each.
One gigawatt equals 1000 megawatts. One gigawatt would power San Francisco or about 770,000 homes.
An excellent article on solar thermal and its benefits is at:
salon.com/news/feature/2008/04/14/solar_electric_thermal/index.html
“The key attribute of CSP is that it generates primary energy in the form of heat, which can be stored 20 to 100 times more cheaply than electricity — and with far greater efficiency”
“I don’t believe any set of technologies will be more important to the climate fight than concentrated solar power (CSP)… It is the best source of clean energy to replace coal and sustain economic development. I bet that it will deliver more power every year this century than coal with carbon capture and storage – for much less money and with far less environmental damage.”
The sunlight can be intensified 1000 fold with concentrating solar.
They do need intense sunlight to be cost-effective, hence the emphasis on the southwest. With 1% of the Sahara Desert, you could power the whole world with current technology; 3% of Morocco would power all of Europe. Green Wombat’s website has many articles on solar power plants being built or on the drawing boards in California and Arizona. The three power companies in California have already signed on for about 3 gigawatts of solar power plants, about 2 gigawatts of which is solar thermal. It’s just the beginning.
Concentrating PV, or photovoltaic, plants use similar parabolic mirrors, Fresnel lenses, etc. to concentrate sunlight on photovoltaic solar cells or panels. Specialized solar cells that can take advantage of the intensified light are used.
“I’d put my money on the sun & solar energy. What a source of power! I hope we don’t have to wait until oil and coal run out before we tackle that.”
Thomas Edison, 1931
Republicans keep pushing nuclear energy, claiming it is a simple solution and good for the environment. I don’t rule out nuclear power altogether, but it has numerous problems, and it is not as green as its promoters claim.
One of nuclear’s biggest problems is water. It takes billions of gallons to cool a single reactor, and we are already seeing one potential problem with this: a reactor in Alabama had to be briefly shut down last summer during a drought in that region. How reliable will sources of cooling water be in a changing climate?
“An Associated Press analysis of the nation’s 104 nuclear reactors found that 24 are in areas experiencing the most severe levels of drought. All but two are built on the shores of lakes and rivers and rely on submerged intake pipes to draw billions of gallons of water for use in cooling and condensing steam after it has turned the plants’ turbines.”
Every nuclear power plant will require about $500 million to dismantle when it has outlived its useful life, which adds to the nuclear waste disposal problem.
Every nuclear reactor also represents about $200 million for its share of Yucca Mountain in Nevada, where the waste is to be disposed of.
Nuclear power doesn’t give us energy independence. We import 65% of our oil and 90% of our uranium. And now Russia is being lined up as a future source of 20% of our uranium.
“The United States and Russia signed a deal that will boost Russian uranium imports to supply the U.S. nuclear industry, the Commerce Department said Friday….”
“The new agreement permits Russia to supply 20 percent of US reactor fuel until 2020 and to supply the fuel for new reactors quota-free.”
“So if, under a President McCain, we build a bunch of new nuclear reactors — they could be fueled 100 percent by Russia.”
“I can almost hear Vladimir Vladimirovich Putin saying, ‘Excellent.’”
gristmill.grist.org/story/2008/3/20/14125/7761
Nuclear power is not safe. According to Argonne National Laboratory, an airliner crashing into a nuclear power plant could cause a complete meltdown, even if the containment building isn’t compromised. Think the twin towers disaster was bad?
The more nuclear reactors are built all over the world, the more fissionable material there will be that can be stolen by terrorists and used against us. Just look at the concern over Iran’s nuclear program. How many times might this kind of scenario play out if nuclear energy proliferates all over the world?
The transportation of radioactive waste from all over the country to Yucca Mt. is potentially dangerous, as well as expensive.
“In the United States, current surcharges on nuclear power are too low to cover expected disposal costs. In addition, the US government foolishly absorbed all risk for an on-time opening of a repository for commercial nuclear waste — despite longstanding technical and political challenges associated with making this happen.” from eoearth.org
There is no accountability with nuclear power. The Price-Anderson Act places most of the liability for nuclear accidents on the backs of taxpayers, not the nuclear power industry.
A nuclear power plant costs about $4,000 per kilowatt of capacity to build, compared with about $1,400 per kilowatt for wind energy.
Wind and solar are much quicker to get up and running than nuclear or coal. And both can start generating power before large wind or solar farms are completed, because they are modular in design.
Nuclear energy is heavily subsidised, like coal, gas, and oil. Estimates are 4-8 cents per kWh.
If you want to know more, read “The Lean Guide to Nuclear Energy” pdf online. It’s a real eye opener.
theleaneconomyconnection.net/downloads.html#Nuclear
from “The Lean Guide to Nuclear Energy”, which takes apart the argument for nuclear energy piece by piece. After reading it you will understand that what you have been told about nuclear energy thus far is completely misleading. Nuclear is not a long-term solution in any way, shape or form; it is inherently unsustainable, and unsustainability is not what we are looking for.
“The world’s endowment of uranium ore is now so depleted that the nuclear industry will never, from its own resources, be able to generate the energy it needs to clear up its own backlog of waste.”
“Shortages of uranium – and the lack of realistic alternatives -leading to interruptions in supply, can be expected to start in the middle years of the decade 2010-2019, and to deepen thereafter.”
“Every stage in the nuclear process, except fission, produces carbon dioxide. As the richest ores are used up, emissions will rise.”
“It is reasonable to conclude that, even if the nuclear industry presented no other problems, “peak uranium” would rule out the prospect of the nuclear industry being in any way an answer to “peak oil”, and to scarcities of gas and coal.”
“Nuclear energy certainly has disadvantages, quite apart from the clincher problem of the depletion of its fuel. It is a source of low-level radiation which may be more dangerous than was previously thought. It is a source of high-level waste which has to be sequestered. Every stage in the process produces lethal waste, including the mining and leaching processes, the milling, the enrichment and the decommissioning. It is very expensive. It is a terrorist target and its enrichment processes are stepping stones to the production of nuclear weapons.”
Wind and solar can provide most of the power for our future energy needs. They never need any fuel to prospect for, mine, transport, refine, store, burn, fight wars over, or clean up after. They are our future. Oil and other fossil fuels will only go up in price, while the price of solar is falling fast and will soon be cheaper than fossil fuels. The American Wind Energy Association forecasts that installed wind capacity could grow from 11,603 MW today to around 100,000 MW by 2020. That’s 100 gigawatts, a nearly 90 gigawatt increase. Hoover Dam produces about 2 gigawatts, as does a medium-size nuclear plant; many nuclear plants are one gigawatt. So in the next twelve years we could add as much capacity from new wind farms as McCain’s plan for 45 new nuclear plants would achieve, at less cost and far less risk. And that’s just wind!
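As a rough sanity check, the arithmetic behind that comparison can be laid out explicitly. The sketch below (Ruby, purely illustrative) uses only the figures quoted above and, like the article, compares nameplate capacity:

```ruby
# Back-of-the-envelope check of the wind figures quoted above (nameplate capacity only).
current_wind_mw  = 11_603    # installed U.S. wind capacity cited above (AWEA)
forecast_wind_mw = 100_000   # AWEA forecast for 2020

added_gw = (forecast_wind_mw - current_wind_mw) / 1000.0
puts "New wind capacity: ~#{added_gw.round} GW"                   # ~88 GW, i.e. "nearly 90 gigawatts"

reactor_gw = 2.0             # a medium-size nuclear plant, or Hoover Dam
puts "Equivalent 2 GW plants: ~#{(added_gw / reactor_gw).round}"  # ~44, close to the 45 reactors in McCain's plan
```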
Solar can do even more. Add photovoltaic panels on rooftops all over the country to the solar plants in the Southwest and you have both distributed and centralized solar energy on a vast scale. Denmark already gets 20% of its power from wind, and parts of Germany and Denmark reach 40%. We are told that wind and solar are too intermittent, so why isn’t that a problem in Denmark? Could it be because they have no oil company lobby?
That’s why we should start building up this new energy infrastructure now. As we build, costs will fall: photovoltaics are becoming more efficient and cheaper to make, and economies of scale will kick in as these industries grow, further reducing prices.
One company on the cutting edge, Nanosolar, says its thin-film PV solar systems can be built for less than the cost of a comparable coal-fired plant, without the need for coal or any other fuel. The company promotes its systems as solutions for individual towns: ten acres on the outskirts of town would power 1,000 homes, and twenty acres would power 2,000 homes.
In many parts of the country, solar prices are already competitive during hours of peak demand, when rates are higher; this is particularly true in sunny areas that also have high electricity prices. Solar plants put out the most energy exactly when it is most needed and when prices are highest, and at those peak prices solar already competes.
We can’t afford to wait. Oil is ruining our economy and our environment. SetAmericaFree estimates the annual hidden costs of oil, including the subsidies mentioned above, at over $800 billion; if these costs were reflected in prices at the pump, gasoline would be close to $12 a gallon. Their estimate of oil and gas company tax credits and subsidies is over $80 billion annually, the military costs of protecting oil shipments are estimated at $100 billion annually, and oil adds $700 billion annually to our trade deficit, mostly with nations we don’t get along with. Throw in the costs, in both lives and money, of the two wars in Iraq, and oil starts to look pretty expensive.
McCain wants to give $4 billion more in tax credits to oil companies. ExxonMobil made $40 billion in profits last year, and the top five companies made a combined $123 billion. We are subsidizing the past when we should be subsidizing the future.
Our lack of political will to develop renewable energy in the U.S. threatens to put us in the position of playing catch-up with other producers.
Green Wombat comments on the Abu Dhabi solar project and Torresol’s ambitions in the U.S. Southwest:
“Abu Dhabi is not content to just sell you the oil that fuels your SUV; now its going to sell you sunshine to keep your lights on and power your electric car when the internal combustion engine goes the way of the buggy whip. Masdar, the oil-rich emirate’s $15 billion renewable energy venture, and Spanish technology company Sener on Wednesday announced a joint venture called Torresol Energy to build large-scale solar power plants in Australia, Europe, the Middle East, North Africa and the United States.”
(They are targeting the same American southwest, where the authors of the Solar Grand Plan proposal are encouraging America to invest.)
“The irony is too rich to leave unsaid: A leading oil producer invests billions in carbon-free energy while a leading consumer of fossil fuels – the United States – continues to subsidize Big Oil while offering only tepid support for green technology.”
“It is inevitable that climate change will foster the rise of renewable energy – the only question is which countries and companies will profit from the new energy economics. It is entirely possible that the U.S. will trade energy dependence of one kind – on Middle East oil – for another – on Middle East and European solar technology – in the era of global warming. It’s no coincidence that most of the solar energy companies with contracts to build utility-scale power plants in California and the Southwest have overseas roots – Ausra hails from Australia, BrightSource was founded by American-Israeli pioneer Arnold Goldman, Solel is based in Israel and Abengoa is headquartered in Spain.”
from the proposal in the Scientific American article:
“The greatest obstacle to implementing a renewable U.S. energy system is not technology or money, however. It is the lack of public awareness that solar power is a practical alternative – and one that can fuel transportation as well. Forward-looking thinkers should try to inspire U.S. citizens, and their political and scientific leaders, about solar power’s incredible potential. Once Americans realize that potential, we believe the desire for energy self-sufficiency and the need to reduce carbon dioxide emissions will prompt them to adopt a national solar plan.”
Precursor Biohackers – Gregor Mendel and Robert Koch
There will certainly be those who challenge the idea of calling these two geniuses and enduring giants of science early biohackers. However, when one studies the techniques, methods, concepts and logic of modern biohacking, it is not very different from how these giants of science originally worked. We can therefore at least say that they are concrete models for the biohacking movement. Do-it-yourself, hand-crafted techniques are common in many fields, including biology, where researchers and amateurs alike have made accidental discoveries through logical research methods.
Biohacking is the practice that merges conceptual biology with the hacking movement. Although considered an amateur movement, it is, as happened in computing, on the way to professionalization and even commercial use and standardization of methods, such as the creation of a “Hello World” for the biosciences as a starting exercise for beginners. When someone dismisses the biohacking movement as amateurish, it should be remembered that most of the great computer hackers did not even have a university degree, and many of them came close to bringing down large corporations with all their professionalism. The world is changing, and the concept of what professionalism really is needs to be rethought. During World War II, the elite force called the SS was considered an amateur group by the Allies, including historians specializing in tactics and military operations, because it lacked the rigorous training, heavy-weapons logistics and large-scale strategy that a regular army possesses. The comparison shows why it is misleading to simply call the biohacking movement amateur: computer hackers were also classified as amateurs, yet the damage they caused and, at the same time, the technological progress they brought about were enormous. The biohacking movement sits in a similar position.
“Now is the time to understand more, so that we may fear less.” – Marie Curie
The reality is that the biohacking movement is expanding its scope and capabilities. From Mendel and his peas, through the discovery of genes and genetic sequencing, to the present day, there has been great progress, yet it still falls short of the needs of a field as complex as the biological sciences. The biohacking movement is emerging as a third way, going far beyond global brainstorming.
However, it is important to understand that, in the beginning, it helps to think about cheap, alternative biohacking tools, while also recognizing that tools are not the real focus of the movement. Consider this detail carefully; as Edsger Dijkstra put it, “computer science is no more about computers than astronomy is about telescopes.”
The movement toward scientific progress through new tools, new means and new methods of generating science and technology at lower cost is already underway. Who would have imagined someone doing genetic sequencing at their own desk at home, with analytical software to study the results? This non-institutional movement mirrors the progress that hackers brought to computing. The biohacking movement is also integral to the concept of transhumanism. The purpose of this article is not to list all the possibilities of biohacking; we are talking about new techniques, new tools and open access to high-level information, so that more people and more brains are working and thinking toward the same innovative goals and discoveries.
Popularizing complex science through new, alternative methods is the strength of the biohacking movement. This will be done by spreading techniques and lowering the cost of alternative equipment, materials and raw materials, using methods that are easy or moderately easy to apply: CRISPR, building improvised bioreactors, running PCR, using centrifuges to separate components of blood or DNA samples, working with PDMS, and handling tissues, cells, stem cells, serum and chemicals, along with the many other methods and techniques through which the movement will evolve.
In parallel with the biohacking movement, there are several other advances and new concepts emerging. Expensive high-resolution electron and atomic microscopy, for example, now has an optical counterpart that earned a Nobel Prize. It will not be surprising if such equipment soon falls in price and spreads widely. What would Robert Koch have been without the microscope?
A microscope, or rather a nanoscope, capable of making the micro and nano worlds accessible to everyone. Combine that power with computing and we will have widespread access, across the globe, to something that until now was limited to large laboratories. The biohacking movement is not amateurish! Rest assured, things will be shaken up and old concepts will soon be revisited.
Alongside nanotechnology and various other fields, transhumanism refers to the deliberate, technology-driven evolution of the human body, using science and technology, biohacking techniques and nanotechnology for finer handling, control and organization of matter. By forced evolution we mean things like enabling people to see in the dark, sense the earth’s magnetism, or sharpen their sense of smell to incredible levels. Some consider transhumanism a form of posthumanism: human bodies beyond biology, artificial beings built on advanced technology. As you can see, progress is exponential, and this is where we are now. As an example, there is a growing movement to produce the first global version of a biohacking method. The idea is not new, and biohacking techniques and methods have already proved successful: two giants of science began their work as true biohackers, namely Gregor Johann Mendel and Robert Koch.
Gregor Johann Mendel: His work and curiosity in the garden led him, almost inevitably, to use what we would now call biohacking techniques to understand how plants function biologically. Many experts say his work was critical to the development of modern genetics.
Robert Koch: Today he is known as one of the greatest bacteriologists the world has seen, a forerunner of in vitro studies together with his assistant Julius Petri, of Petri dish fame. He was a country doctor with limited time and resources, but his breakthroughs were made possible by working as a biohacker, running experiments on his own property, and he went on to become the first expert on anthrax spores. His name belongs among the greats of science because of his persistence and professionalism in the field.
The purest science is based on observation, deduction and analysis. The Austrian monk Gregor Johann Mendel joined the Order of St. Augustine around 1844 and spent his free time at the monastery observing plants; his duties included overseeing the monastery gardens in Brno. As a teacher of natural sciences with a special interest in the crossbreeding of plant and animal species, his great achievement came from his analytical observations of peas, whose results he analyzed mathematically. He is now considered the father of genetics, thanks to his precise observations of color changes, variation and the mechanics of flowers, and ultimately of the mechanism of heredity we know today as genes. His masterpiece, on experiments with plant hybrids and artificial fertilization, was effectively the forerunner of the laws of heredity, now known as Mendel’s laws, which govern the transmission of hereditary characteristics, hence genes.
What the greats of science did in the past can now, in short, be achieved in a much more profound way and at much lower cost in the same fields. This is exactly the path the biohacking movement and other branches of advanced science and technology are taking.
3D Printing: The Near Future and Market Opportunities Explored
The 3D printing process was invented by Chuck Hull in 1983 under the name “stereolithography,” a technique for constructing solid objects by successively curing thin layers of ultraviolet-sensitive material on top of each other. This technique laid the foundation for today’s 3D printing. Modern 3D printing can be defined as an additive manufacturing process that generates a physical object from a digital design. There are various 3D printing technologies and materials on the market today, but they all follow the same basic procedure: a solid object is built from a digital design by adding successive layers. A typical 3D print starts with a digital design file of the object. The next steps vary with the technology and material used, but in general the printer melts or cures the material and deposits it on the build platform layer by layer. Print time depends heavily on the size of the object, and often on post-processing steps as well. Common printing techniques include fused deposition modeling, stereolithography, digital light processing, selective laser sintering, polyjet and multijet modeling, binder jetting, and metal printing (selective laser melting and electron beam melting). Print materials range from rubber and plastics (polyamide, ABS, PLA and LayWood) to ceramics, biomaterials, sandstone, metals and alloys (titanium, aluminium, steel, cobalt-chrome and nickel).
3D printing is advantageous because it allows the construction of complex designs that cannot be produced by traditional methods, enables customization of products without extra parts, tooling or additional cost, and gives entrepreneurs and designers a cost-effective way to produce items for market testing or other needs. In addition, traditional factory production methods generate a huge amount of raw-material waste; some products waste nearly 90% of their raw material during manufacture. The 3D printing process, by contrast, involves minimal material loss, and leftover material can often be recycled into the next cycle.
However, 3D printing is often associated with disadvantages such as the high cost of large-scale production, limited strength and durability, and lower resolution. There are currently more than 500 3D printing materials on the market, most of them plastics and metals, but thanks to rapid technological advances the range of materials is growing quickly and now includes wood, composites, meat, chocolate and more.
According to public sources, by 2027 one tenth of the world’s production will be 3D printed, and the price of printers is expected to drop from around $18,000 USD to $400 USD over the next 10 years. Various companies have already started 3D-printed production, from dominant shoe companies to aircraft-structure manufacturers. As the technology evolves, smartphones may gain built-in 3D scanners, allowing almost anything to be designed and built at home; China, for example, has already erected an entire six-story building using 3D printing technology.
3D printing has diverse applications in medicine, manufacturing, socio-cultural fields and industry. By production application, the field is divided into flexible tooling, food, research, prototyping, cloud-based additive manufacturing, and mass customization. By medical application, the field is segmented into bioprinting devices and drugs. For example, in August 2015 a 3D-printed surgical device called the “FastForward Bone Tether Plate” was approved by the Food and Drug Administration (FDA) for the treatment of bunions. In May 2017, researchers at the Max Planck Institute for Intelligent Systems in Germany developed micromachines called microswimmers, using Nanoscribe GmbH’s 3D printing technology, which can deliver drugs precisely to the site of an infection and be steered inside the body. Various industries have adopted 3D printing to manufacture their products; Airbus SAS of France, for example, stated that its Airbus A350 XWB contains more than 100 3D-printed components. In aerospace, NASA’s Marshall Space Flight Center (MSFC) and Made In Space, Inc. collaborated to develop a 3D printer that can print in zero gravity.
The 3D Printing Market
The global 3D printing market is projected to reach XX USD by 2022, from XX in 2015 at a CAGR of XX% from 2016 to 2022 according to the latest updated report available on DecisionDatabases.com. The market is segmented on the basis of printer type, material type, material form, software, service, technology, process, vertical, application, and geography.
On the basis of printer type, the market is segmented into desktop 3D printers and industrial printers. On the basis of material type, the market is segmented into plastics, metals, ceramics, and others (wax, wood, paper, biomaterials). On the basis of material form, the market is segmented into filament, powder and liquid. On the basis of software, the market is segmented into design software, proofing software, printer software, and scanning software. On the basis of technology, the market is segmented into stereolithography, fused deposition modeling, selective laser sintering, direct metal laser sintering, polyjet printing, inkjet printing, electron beam melting, laser metal deposition, digital light processing, and laminated object manufacturing. On the basis of process, the market is segmented into binder jetting, directed energy deposition, material extrusion, material jetting, powder bed fusion, vat photopolymerization, and sheet lamination. By vertical, the market is segmented into automotive, healthcare, architecture & construction, consumer products, education, industrial, energy, printed electronics, jewelry, food & culinary, aerospace & defense, and others. On the basis of application, the market is segmented into prototypes, tools, and functional parts.
By geography, the market is segmented into North America, Latin America, Europe, Asia Pacific, and Middle East & Africa.
Factors such as high investment in research and development (R&D), low wastage of raw materials, and the ease of producing custom-built products are driving the growth of the market. However, factors like the limited availability of printers, the high cost of materials, and a shortage of skilled professionals are hindering that growth.
Rails Hosting – 10 VPS Providers that FULLY support Ruby on Rails
The simple answer to running Ruby on Rails applications on different hosting services is that if you have access to the underlying operating system, you will be able to run the applications.
The core requirements (well, two core requirements) that are essential for Rails applications and are missing from most “traditional” hosting services include…
- A deployment mechanism (usually Git)
- A viable application server that supports Rails (Puma or Passenger)
The first problem can usually be worked around using FTP (not the most efficient solution, but it still works).
The second is much more problematic, and it is why most people end up using VPS solutions to deploy Rails applications (a VPS gives you full access to the underlying infrastructure).
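To make the application-server requirement concrete, here is a minimal Puma configuration of the kind you would run on such a VPS. This is only a sketch; the worker and thread counts are placeholder values you would tune to the server you actually provision.

```ruby
# config/puma.rb - minimal sketch; tune workers/threads to your VPS
workers Integer(ENV.fetch("WEB_CONCURRENCY", 2))      # roughly one worker per CPU core
max_threads = Integer(ENV.fetch("RAILS_MAX_THREADS", 5))
threads max_threads, max_threads

environment ENV.fetch("RAILS_ENV", "production")
port        ENV.fetch("PORT", 3000)
preload_app!                                          # load the app once, then fork workers
```

Behind a reverse proxy such as nginx you would typically bind Puma to a Unix socket rather than a port, but the idea is the same.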
VPS servers are basically what “cloud” providers give people access to. Unlike “traditional” hosts, which literally allocate you space on a single server, the newer “cloud” infrastructure divides the load across an entire data center of servers.
This not only keeps costs down, but ensures that the buyer can actually *scale* their computing resources without having to pay for a whole new physical server. In any case, a “cloud” VPS is the way to go if you want full control over how a Rails application is hosted; the only catch is that you are responsible for securing the server yourself (which is another story in itself).
Rails Compatible Hosts
To that end, the most important thing to understand is that if you’re looking at this list – ANY VPS server will be able to run a Rails application. You just need to make sure you know how to install the different apps (which I’ll cover in another article). For now, let’s take a look at the most efficient and cost-effective hosts:
- DigitalOcean
The undeniable tsar of cheap “cloud” VPS providers. Founded in 2011, it was the first to provide VPS infrastructure at a single, simple price for developers. From $5/month you get access to multiple data centers and many different server configurations. The most important thing to understand about DO – as with most other “cloud” VPS hosts – is that spinning up a VPS server literally gives you access to a Linux box running in a data center. You are responsible for setting up everything else (unless, of course, you pay for pre-built images and the like). Regardless, this is the most effective “budget” VPS provider for Rails applications.
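For illustration, spinning up a droplet is a single authenticated API call. The sketch below uses DigitalOcean’s public v2 API from Ruby; the token comes from an environment variable, and the name, region, size and image slugs are placeholder values (slugs change over time, so check the current API docs before relying on them).

```ruby
# Sketch: create a DigitalOcean droplet via the v2 API (placeholder name/slugs).
require "net/http"
require "json"
require "uri"

uri = URI("https://api.digitalocean.com/v2/droplets")
req = Net::HTTP::Post.new(uri)
req["Content-Type"]  = "application/json"
req["Authorization"] = "Bearer #{ENV.fetch('DO_TOKEN')}"  # your personal access token
req.body = {
  name:   "rails-app-1",        # hypothetical droplet name
  region: "nyc3",               # example region slug
  size:   "s-1vcpu-1gb",        # example size slug (the ~$5/month tier)
  image:  "ubuntu-22-04-x64"    # example base image
}.to_json

res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |http| http.request(req) }
puts res.code, res.body         # expect 202 Accepted plus the new droplet's JSON
```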
- Vultr
A lesser known but still very effective cloud VPS service, Vultr is basically a “mini-me” for DigitalOcean. It has data centers in various locations (from the US to Japan and even Germany and the Netherlands) – allowing for wider coverage. The most important thing to appreciate about Vultr is that it’s basically designed to be the equivalent of DigitalOcean – without any of the extra frills that the former might have. For example, it has no built-in monitoring software (which DigitalOcean includes for free), and Vultr’s big claim to fame came from its $2.50/mo VPS server (which is currently “sold out”). This was very effective for developers who just wanted to push simple applications (either for testing in a staging environment or to keep costs low). You still need to provision servers like you do with DigitalOcean.
- UpCloud
Advertised as the “fastest” cloud VPS provider, Finland’s UpCloud provides essentially the same services as the first two providers (DigitalOcean and Vultr), but with a much deeper focus on support. Providing an API along with a myriad of other services, the platform lets users deploy VPS servers in a number of data centers around the world. The main difference here is the speed of the servers they run, which they attribute to their MaxIOPS technology, which keeps a lot of data in memory (thereby speeding up access). Prices start at $5/month and, yes, you’ll still need to provision the servers yourself.
- Exoscale
European Cloud Hosting – Based in Switzerland, they specialize in providing Eurocentric infrastructure. With 4 data centers (2 in Switzerland, 1 in Austria and 1 in Germany), the company has chosen to be extremely specific in its approach to providing infrastructure for various application developers. Although their prices are very competitive, the most important thing to realize about this company is the efficiency they provide. Being Swiss, they benefit from the ingrained culture of efficiency that permeates much of the Swiss community. This means you’ll not only get quick email responses, but also in-depth and well-thought-out responses. They tend to provide services to many banks and financial institutions across Europe. Their niche level targeting allows them to specialize in providing optimal speed, reliability and efficiency of their services to the clients they end up working with.
- Hetzner (Cloud)
Hetzner is a German hosting company with two data centers in the country. It was founded as a “traditional” host, essentially allocating space in its data centers to whoever paid for servers, but since 2017 the company has offered a “cloud” service through which you can provision VPS servers in exactly the same way as with DigitalOcean, Vultr and the other providers listed here. At comparable prices, the most important thing to note about Hetzner’s business is that it is almost exclusively focused on the German market. That’s not to say they don’t serve international customers, but in terms of data-center locations and how they handle support, it is a thoroughly German operation. With prices starting at around $5/month, they only provide the server itself – the onus is on you to provision it.
- Linode
Not as famous as DigitalOcean or Vultr, but no less effective – Linode is a favorite of many smaller developers, as it was one of the first to offer cheap “cloud” VPS servers. Linode is efficient, with prices starting at $5/month – it has a number of data centers around the world and is almost on par with more popular “cloud” services. As always – you get no frills with the service. You still need to provision and maintain the servers yourself.
- Rackspace
The “father” of online hosting, Rackspace has been a major player in the hosting world since its inception in 1998. As you can imagine, they also got into the cloud game very early on. The problem with Rackspace, as with Microsoft, is that it’s expensive. Designed primarily for larger organizations, their “cloud” servers start at $50/month, but they make up for it with the “fanatical” support the company provides. This support is actually very good and allows users to rely on them to keep things running as smoothly as possible. I would not recommend Rackspace for smaller projects; it’s just not worth the price, especially when providers like DigitalOcean do the same thing for a fraction of the cost.
- Microsoft Azure
Microsoft’s “cloud” VPS offering is probably the most efficient of the big three (Google, Amazon, Microsoft). Azure is packed with additional services that help developers run applications across the vast number of Microsoft-owned data centers. Fully supporting both Linux and Windows VPS systems, the company is one of the few that provide deeper insight into how the different servers are performing: you get access to a rich dashboard through which you can track everything from resource usage to how many requests different servers have received. While this sounds good, it is expensive, and it is really designed to help huge organizations adopt the “cloud”, which puts it out of reach for most smaller developers. If you are interested in using it, research the pricing carefully first.
- AWS (EC2)
AWS is good but expensive (especially if you need more computing resources). Hailed as the “original” cloud provider, each EC2 instance you spin up acts as an independent VPS. The problem with AWS is that because it’s so broad, it’s hard to know what you actually need from it. Also, as with Microsoft Azure and Google Cloud Platform, the sheer scale of the infrastructure at play is huge. To that end, it should come as no surprise that the majority of popular web-based applications (especially those that rely on the likes of S3) run on EC2 and the wider AWS platform. Because of this, the service is generally best suited to larger deployments that require multiple server clusters, database servers, and CDN management (Amazon’s own CDN is CloudFront). If you want to deploy a large and popular application, AWS infrastructure will surely help you. The pricing isn’t great, but it’s well maintained and backed by Amazon’s massive infrastructure (which it uses for its own operations).
- Google Cloud Platform
Google’s entry into the “cloud” space, its “cloud platform” is used by companies such as Apple and Twitter. Like Azure & AWS, it is used by larger organizations to streamline their infrastructure requirements. Since Google uses the platform for its own infrastructure, it stands to reason that you should be able to trust the system – and their community is actually very strong and active. The big difference with Google’s platform is pricing. They offer a very competitive price range, which allows a number of different developers to deploy software without incurring huge costs to do so.
The key to all of this – as mentioned – is that you’ll usually need to provision the various servers. This means installing web + application server software, libraries, and any ancillary services (SSL certificates, etc.).
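If you do go the do-it-yourself route, the Git-based deployment step is usually handled by a tool such as Capistrano (one common choice, not the only one). A minimal config/deploy.rb sketch is shown below; the application name, repository URL, paths and server address are hypothetical placeholders.

```ruby
# config/deploy.rb - minimal Capistrano 3 sketch (all names/paths are placeholders)
lock "~> 3.17"

set :application, "myapp"
set :repo_url,    "git@example.com:me/myapp.git"
set :deploy_to,   "/var/www/myapp"

append :linked_files, "config/master.key"                # secrets stay on the server, out of Git
append :linked_dirs,  "log", "tmp/pids", "tmp/sockets", "storage"

# config/deploy/production.rb then points at the VPS you provisioned, e.g.:
# server "203.0.113.10", user: "deploy", roles: %w[app db web]
```

From there, running `cap production deploy` pushes the configured Git revision to the server and restarts the application server.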
If you’re willing to use a service like Nanobox, Hatchbox, RailsHosting or VPSDeploy – you should be able to avoid the pain of having to set up a valid web host… but ultimately it’s entirely up to you what you do.
To be clear – the beauty of ‘traditional’/’shared’ hosting has yet to catch on in the ‘cloud’ arena. Instead of providing a simple platform for app deployment, you’re pretty much left to your own devices.
Nutrimin C Review
Some of the products in the line are Nutrimin C RE9 RENewing Gelee Hydrating Wash, Restoring Mist, Balancing Toner, Re9Reversing Gelee Transforming Lift, Reactivating Facial Serum (Day and Night formulas), Re9Repair Corrective Eye Creme, Reality SPF 8 Day Creme, Recover Night Cream, Reveal Facial Scrub and Deep Pore Cleansing Mask. Body lotion, foaming body wash, body serum and capsule dietary supplements are also available for purchase.
Ingredients
The Nutrimin C RE9 skin care system includes a biohydria complex (a blend of seven plant nutrients that soften and support the skin); nanospheres that carry antioxidant vitamins; magnesium ascorbyl phosphate, a form of vitamin C that improves skin elasticity; elhibin and stimu-tex, which improve the texture and appearance of the skin; the antioxidant alpha lipoic acid; kojic acid, used to fight skin damage from the sun and the environment; honey, which helps reduce fine lines; alpha and beta hydroxy acids; and peptides that reduce the appearance of wrinkles.
Advice
Although the RE9 skincare line involves 5-6 steps, once you get the hang of it the whole process shouldn’t take more than 2 minutes to apply. You start with the hydrating wash and then follow up with the mist and toner. The transforming lift does not have to be used every day, but it can be if needed. You then apply the day facial serum and the Reality day cream. The night facial serum, corrective eye cream and restorative night cream are used before sleep. The facial scrub and deep pore cleansing mask should not be used every day, nor on the same day as the others; they are best used 1-2 times a week, immediately after cleansing.
Benefits
Vitamin C has long been used in skin care products to tighten the skin. Vitamin C Magnesium Ascorbyl Phosphate in this product is a biologically active form of Vitamin C. It helps to produce collagen in the skin, which in turn helps to strengthen and restore skin elasticity. This is exactly what you want to see in an anti-aging skin care line.
Since the RE9 skin care line is available through Arbonne, customers get the added benefit of personalized attention from a local representative, who can help you figure out which products will work best for you and how to use them properly.
Positive customer reviews
Reviews of Arbonne Nutrimin C are mostly positive. Some people say that their skin breaks out on first use, but that is just the face being cleansed of impurities; the end result is younger, healthier skin. Some even saw their lines and wrinkles fade within days.
When you consider the cost of plastic surgery these days (and the risks), it doesn’t take much to see that anti-aging skin care with proven results is worth the price. For one price, you get 7 products that will help you look younger than ever. Looking younger will improve your self-image and help you feel younger. Try Arbonne Nutrimin C today.