Exploring the Vast Cyber-Space Information Realms of Clearnets and Darknets

Thoughts on Crawling and Understanding the Darknet

Sourced through Scoop.it from: blog.lewman.is

>” […]

Darknets have been around for a decade or so. Some of the most well-known are on the Tor network: Silk Road, WikiLeaks, Silk Road 2, StrongBox, and so on. For good or bad, Silk Road is what helped bring darknets to the masses.

The current trend in information security is to try to build insight and intelligence into and from the underground, or the darknet. Many companies are focused on the “darknet.” The idea is to learn about what’s below the surface, or about near-future attacks or threats, before they affect ordinary companies and people. For example, an intelligence agency wants to learn about clandestine operations within its borders, or a financial company wants to learn about attacks on its services and customers before anyone else.

I’m defining the darknet as any service which requires special software to access it, such as:
1. Tor’s hidden services,
2. I2P,
3. FreeNet, and
4. GnuNet.

There are many more services out there, but in effect they all require special software to access content or services in their own address space.
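
As a concrete illustration of what “special software” means in practice, here is a minimal sketch of reaching a Tor hidden service from Python. It assumes a local Tor client is already running with its standard SOCKS proxy on 127.0.0.1:9050 and that the requests package is installed with SOCKS support; the .onion address is a placeholder, not a real service.

```python
# Minimal sketch: fetch a page from a Tor hidden service via the local Tor
# SOCKS proxy. Assumes Tor is running on 127.0.0.1:9050 and that `requests`
# is installed with SOCKS support (pip install "requests[socks]").
import requests

# Placeholder address; substitute a hidden service you have permission to access.
ONION_URL = "http://exampleonionaddress.onion/"

# The socks5h scheme resolves the .onion name inside the Tor network instead
# of leaking it to a local DNS resolver.
proxies = {
    "http": "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

response = requests.get(ONION_URL, proxies=proxies, timeout=60)
print(response.status_code, len(response.text))
```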

Most darknet systems are really overlay networks on top of TCP/IP or UDP/IP. The goal is to create a different addressing system than simply using IP addresses (of either v4 or v6 flavor). XMPP could also be considered an overlay network, but not a darknet: it relies heavily on public IPv4/IPv6 addressing to function, and it’s trivial to learn detailed metadata about conversations by watching either an XMPP stream or an XMPP server.
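
To make the idea of a separate address space concrete: Tor’s classic (v2) hidden-service names are not assigned by any registry; each one is derived from the service’s own public key, as the base32 encoding of the first 80 bits of a SHA-1 hash of that key. The sketch below illustrates only that derivation; random bytes stand in for a DER-encoded RSA public key, so the output merely has the right shape and is not a usable address.

```python
# Illustrative sketch of self-authenticating overlay addressing, modelled on
# Tor's classic v2 onion addresses: base32(first 80 bits of SHA-1(public key)).
import base64
import hashlib
import os

def onion_style_address(public_key_der: bytes) -> str:
    digest = hashlib.sha1(public_key_der).digest()  # 20-byte SHA-1 of the key
    truncated = digest[:10]                         # keep only the first 80 bits
    return base64.b32encode(truncated).decode("ascii").lower() + ".onion"

# Random bytes stand in for a DER-encoded RSA public key in this sketch.
fake_key = os.urandom(140)
print(onion_style_address(fake_key))  # prints a 16-character, v2-style .onion name
```

Because the name is bound to the key, whoever holds the private key owns the address; no central authority has to allocate it.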

The vastness of address spaces

Let’s expand on address space. In the “clearnet” we have IP addresses of two flavors, IPv4 and IPv6. Most people are familiar with IPv4, the classic xxx.xxx.xxx.xxx address. IPv6 addresses are long in order to create a vast address space for the world to use, for, say, the Internet of Things, or a few trillion devices all online at once. IPv6 is actually fun and fantastic, especially when paired with IPsec, but that is a topic for another post. The IPv4 address space is 32 bits in size, or roughly 4.3 billion addresses. The IPv6 address space is 128 bits, or trillions upon trillions of addresses. There are some quirks to IPv4 which let us use more than 4.3 billion addresses, but it’s the scale of the spaces we care about most: IPv6 is vastly larger. Overlay networks are built to create, or use, different properties of an address space. Rather than going to a global governing body and asking for a slice of the space to call your own, an overlay network generally lets you do that without a central authority.
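
The difference in scale is easy to verify; a few lines of Python reproduce the figures above.

```python
# Raw sizes of the two address spaces discussed above.
ipv4_space = 2 ** 32    # 4,294,967,296 addresses (roughly 4.3 billion)
ipv6_space = 2 ** 128   # about 3.4e38 addresses

print(f"IPv4: {ipv4_space:,} addresses")
print(f"IPv6: {ipv6_space:.3e} addresses")
print(f"IPv6 is about {ipv6_space // ipv4_space:.2e} times larger (a factor of 2**96)")
```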

Defining darknets

There are other definitions or nomenclature for darknets, such as the deep web:

noun 1. the portion of the Internet that is hidden from conventional search engines, as by encryption; the aggregate of unindexed websites: private databases and other unlinked content on the deep web.

Basically, the content you won’t find on Google, Bing, or Yahoo no matter how advanced your search prowess.

How big is the darknet?

No one knows how large the darknet is. By definition, it’s not easy to find its services or content. However, a number of people are working to figure out its scope and size, and to further classify the content found on it. There are a few amateur sites trying to index various darknets, such as Ahmia, and others only reachable with darknet software. Some researchers are working on the topic as well; see Dr. Owen’s video presentation, Tor: Hidden Services and Deanonymisation. A public example is DARPA MEMEX. Their open catalog of tools is a fine starting point. […]”<
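
To give a flavour of what the crawling side of that research involves, here is a minimal breadth-first sketch that walks .onion links through the same local Tor SOCKS proxy assumed earlier. It relies on the requests (with SOCKS support) and beautifulsoup4 packages, the seed URL is a placeholder, and a real crawl would need rate limiting plus legal and ethical safeguards that are out of scope here.

```python
# Minimal breadth-first sketch: crawl .onion pages through Tor and collect
# the hidden-service hostnames they link to. Assumes Tor on 127.0.0.1:9050,
# plus `requests[socks]` and `beautifulsoup4`. The seed is a placeholder.
from collections import deque
from urllib.parse import urljoin, urlparse

import requests
from bs4 import BeautifulSoup

PROXIES = {"http": "socks5h://127.0.0.1:9050", "https": "socks5h://127.0.0.1:9050"}
SEED = "http://exampleonionaddress.onion/"
MAX_PAGES = 50

seen_pages, found_hosts = set(), set()
queue = deque([SEED])

while queue and len(seen_pages) < MAX_PAGES:
    url = queue.popleft()
    if url in seen_pages:
        continue
    seen_pages.add(url)
    try:
        html = requests.get(url, proxies=PROXIES, timeout=60).text
    except requests.RequestException:
        continue  # unreachable services are common on darknets
    for link in BeautifulSoup(html, "html.parser").find_all("a", href=True):
        target = urljoin(url, link["href"])
        host = urlparse(target).hostname or ""
        if host.endswith(".onion"):
            found_hosts.add(host)
            queue.append(target)

print(f"Visited {len(seen_pages)} pages, found {len(found_hosts)} onion hosts")
```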

See on Scoop.it: Social Media, Crypto-Currency, Security & Finance


Data Scientists: Explore Game Theory to Boost Customer Engagement | The Big Data Hub

See on Scoop.it: Twitter & Social Media

Data scientists may consider themselves fish out of water when it comes to applying game-theoretic approaches to customer engagement. Nevertheless, game theory provides a valuable set of approaches for behavioral analytics.

Duane Tilden‘s insight:

>Customer engagement is a bit of a game, because, deep down, it’s a form of haggling and bargaining. Let’s be blunt: everybody has an ulterior purpose and is manipulating the other party in that direction. The customer is trying to get the best deal from you, and you’re trying to hold onto them and sell them more stuff at a healthy profit.

Customer engagement is not solitaire, and, unlike many online games, it always has very real stakes. By its very nature, customer engagement is an interactive decision process involving individuals and organizations, entailing varying degrees of cooperation and conflict in pursuit (hopefully) of a stable and mutually beneficial outcome.

Game theory is a modeling discipline that focuses on strategic decision-making scenarios. It leverages a substantial body of applied mathematics and has been used successfully in many disciplines, including economics, politics, management and biology. There has even been some recent discussion of its possible application in modeling customer-engagement scenarios to improve loyalty, upsell and the like.

Customer engagement modeling is a largely unexplored frontier for game theory. The literature on this is relatively sparse right now, compared to other domains where game theory’s principles have been applied. […]<
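
As a toy illustration of the sort of model game theory brings to this, here is a minimal sketch of a 2x2 “retention game” between a vendor (discount or full price) and a customer (stay or churn), with a brute-force search for pure-strategy Nash equilibria. All payoff numbers are invented for illustration only.

```python
# Toy 2x2 bimatrix "retention game" with a brute-force search for
# pure-strategy Nash equilibria. All payoffs are invented for illustration.
import itertools

vendor_actions = ["discount", "full price"]
customer_actions = ["stay", "churn"]

# payoffs[(vendor_action, customer_action)] = (vendor_payoff, customer_payoff)
payoffs = {
    ("discount",   "stay"):  (6, 5),
    ("discount",   "churn"): (0, 3),
    ("full price", "stay"):  (8, 2),
    ("full price", "churn"): (1, 4),
}

def is_nash(v_act, c_act):
    """True if neither player can gain by unilaterally switching actions."""
    v_pay, c_pay = payoffs[(v_act, c_act)]
    vendor_ok = all(payoffs[(alt, c_act)][0] <= v_pay for alt in vendor_actions)
    customer_ok = all(payoffs[(v_act, alt)][1] <= c_pay for alt in customer_actions)
    return vendor_ok and customer_ok

for v_act, c_act in itertools.product(vendor_actions, customer_actions):
    if is_nash(v_act, c_act):
        print(f"Pure-strategy Nash equilibrium: vendor={v_act}, customer={c_act}")
```

With these invented numbers the only one-shot equilibrium is the mutually poor (full price, churn) outcome, which hints at why engagement is better treated as a repeated interaction in which loyalty and reputation reshape the payoffs.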

See on www.ibmbigdatahub.com

Proliferation of wireless devices and networks detrimental to environment

See on Scoop.it: Green & Sustainable News

Cloud computing should be driving sustainable development, but it’s turning us into energy-consuming monsters, write Stuart Newstead and Howard Williams

Duane Tilden‘s insight:

>There is a familiarity and comfort in our almost-everywhere connection to always-on communications networks and to the ever-increasing array of services they deliver to us. We don’t just consume these network services directly; they give us what economists call “options” – options to connect, options to seek out new services, options to find new information. Clearly we don’t use these network services 24/7, but we value highly the options for instantaneous and simultaneous access at any time.

Cloud-based applications – those stored and managed by massive data centres run by the likes of Amazon, Google, Facebook or Apple – are providing step changes in the financial and environmental efficiency of delivering these services. But the centralising power of the cloud has its corollary in the dispersing effect of wireless networks and devices.

In wireless networks and devices we see fragmentation, duplication and a fundamental shift from mains power and green sources of energy to battery-powered, always-on devices. In environmental terms, here lies the rub. Rather than the “aggregation of marginal gains” (the Sir Dave Brailsford strategy that has propelled success in British cycling), in which lots of tiny improvements add up to a large visible improvement, we are witnessing the aggregation of environmental disadvantages from billions of low-powered but fundamentally energy-inefficient antennas and devices providing the ‘last metre’ connectivity to global networks.

Wireless networks and devices, technologies that should drive sustainable development, are turning into energy-consuming monsters.<
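
A rough back-of-envelope calculation makes the aggregation argument tangible; every figure below is an illustrative assumption, not a measurement from the article.

```python
# Back-of-envelope sketch of the "aggregation of disadvantages" argument.
# Every number here is an assumed, illustrative figure.
devices = 5e9                # assumed always-on wireless devices worldwide
avg_draw_watts = 1.5         # assumed average continuous draw per device (W)
hours_per_year = 24 * 365

total_watts = devices * avg_draw_watts
kwh_per_year = total_watts * hours_per_year / 1_000

print(f"Continuous draw: {total_watts / 1e9:.1f} GW")      # 7.5 GW
print(f"Annual energy:   {kwh_per_year / 1e9:.1f} TWh")     # about 66 TWh
```

Even at a watt or two per device, the aggregate lands on the scale of whole power stations, which is the point the authors are making.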

See on www.theguardian.com

Renewable Energy or Efficiency for the Data Center: Which first? #GreenComputing

See on Scoop.it: Green & Sustainable News

New advancements in green technology and design are making the idea of a green data center a reality.

Duane Tilden‘s insight:

>Without doubt, the facility is a triumph of advanced environmental design and will serve as a template for future construction. Indeed, activity surrounding renewable-based data infrastructure is picking up, with much of it being led by the burgeoning renewable energy industry itself. VIESTE Energy, LLC, for example, has hired design firm Environmental Systems Design (ESD) to plan out a series of data centers across the U.S. that run on 100 percent renewable energy. A key component of the plan is a new biogas-fed generator capable of 8 to 15 MW of output. The intent is to prove that renewables are fully capable of delivering reliable, cost-effective service to always-on data infrastructure.

The question of reliability has always weighed heavily on the renewables market, but initiatives like the VIESTE program could help counter those impressions in a very important way, by establishing a grid of distributed, green-energy data supply. In fact, this is the stated goal of the New York State Energy Research and Development Authority (NYSERDA), which has gathered together a number of industry leaders, including AMD, HP and GE, to establish a network of distributed, green data centers that can be used to shift loads, scale infrastructure up and down and in general make it easier for data users to maintain their reliance on renewable energy even if supply at one location is diminished. In other words, distributed architectures improve green reliability through redundancy just as they do for data infrastructure in general.
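
The load-shifting idea in the NYSERDA description can be sketched very simply; the site names, capacities, and workload below are invented, and the greedy placement is just one illustrative policy, not the consortium's actual mechanism.

```python
# Minimal sketch of shifting load across distributed, renewable-powered data
# centers: place workload where renewable supply currently has the most headroom.
# All names and numbers are invented for illustration.
sites = {
    "site_a": {"capacity_mw": 12, "renewable_mw": 9},
    "site_b": {"capacity_mw": 15, "renewable_mw": 4},
    "site_c": {"capacity_mw": 10, "renewable_mw": 10},
}

workload_mw = 18  # total IT load that needs a home right now

placement = {name: 0 for name in sites}
remaining = workload_mw

# Greedy placement: fill the sites with the most renewable supply first.
for name, site in sorted(sites.items(), key=lambda kv: kv[1]["renewable_mw"], reverse=True):
    usable = min(site["capacity_mw"], site["renewable_mw"])
    share = min(remaining, usable)
    placement[name] += share
    remaining -= share

# Fall back to non-renewable capacity only if green supply runs out.
for name, site in sites.items():
    if remaining <= 0:
        break
    extra = min(remaining, site["capacity_mw"] - placement[name])
    placement[name] += extra
    remaining -= extra

print(placement)  # {'site_a': 8, 'site_b': 0, 'site_c': 10}
```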

But not everyone on the environmental side is convinced that renewables are the best means of fostering data center efficiency. In a recent article in the journal Nature Climate Change, Stanford researcher Dr. Jonathan Koomey argues that without first populating existing infrastructure with low-power hardware and data-power management technology, data operators are simply wasting precious renewable resources that could be put to better use elsewhere. For projects like the NWSC and VIESTE, then, renewables may make sense because they power state-of-the-art green technology. As an industry-wide solution, though, renewables won’t make sense until hardware life cycles run their course.<

See on www.itbusinessedge.com

Green Computing – Business migration to Data Centers and the Cloud

>In the cloud computing scenario, time and power savings mean everything. At these huge scales, running hardly used servers is effectively throwing money away as well as annoying the environmentalists. Here resource scheduling becomes amazingly effective. So we go back to our 5:30pm shutdown, but on this occasion the technology hosting the virtual infrastructure kicks in: instead of letting hosts sit there half used, it will begin to migrate VMs that no longer require many resources onto the same host, so that, although as a whole they may still be using quite a lot of resource, other hosts can be powered down and sit in a ‘power saving’ state, waking from their slumber when resource requirements increase again. To get a sense of the scale of this, you could save power on, say, 100 hosts. These hosts, being the beefiest and most cutting-edge servers available, require a large amount of power, and turning them off when not in use is a godsend to the idea of cloud computing. Why leave the light on in the attic if you have no intention of going there?

So that is effectively how these datacentres work from a green perspective, but what does the company utilising this infrastructure save?

  • Downsize or offset office space.
  • Downsize onsite infrastructure requirements.
  • Expand the ability for users to work remotely (or globally, depending on your requirements).
  • Support the mobile workforce.
  • Reduce consumables use (printing, ink, paper, file storage costs).
  • Reduce hardware (desktop computers, server systems, UPSs, cabling).
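
The consolidation behaviour described a couple of paragraphs up, migrating lightly used VMs onto fewer hosts so the rest can drop into a power-saving state, is essentially a bin-packing problem. Here is a minimal first-fit-decreasing sketch with invented VM sizes and host capacity; it illustrates the general idea only, not any particular vendor's migration or power-management feature.

```python
# Minimal sketch of VM consolidation as first-fit-decreasing bin packing:
# pack VMs onto as few hosts as possible so the idle hosts can be powered down.
# VM demands and host capacity are invented figures.
HOST_CAPACITY_GB = 64  # assumed usable RAM per host

# (vm_name, ram_demand_gb) after the 5:30pm drop in load
vms = [("web1", 8), ("web2", 8), ("db1", 32), ("batch1", 16),
       ("dev1", 4), ("dev2", 4), ("mail", 12)]

hosts = []  # each entry: {"free": remaining GB, "vms": [names]}

for name, demand in sorted(vms, key=lambda v: v[1], reverse=True):
    for host in hosts:                      # first fit: reuse an open host
        if host["free"] >= demand:
            host["vms"].append(name)
            host["free"] -= demand
            break
    else:                                   # otherwise keep one more host awake
        hosts.append({"free": HOST_CAPACITY_GB - demand, "vms": [name]})

for i, host in enumerate(hosts, start=1):
    print(f"host{i}: {host['vms']} ({HOST_CAPACITY_GB - host['free']} GB used)")
print(f"Hosts needed: {len(hosts)}; the rest can sit in a power-saving state")
```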


Green Computing – Office365 #greencomputing

The 21st century data center: You’re doing it wrong | ZDNet

See on Scoop.it: Green Building Design – Architecture & Engineering

Outdated designs are keeping data centers from reaching their full potential.

Duane Tilden‘s insight:

>One example of this is data centers that use raised floors for cooling. Many IT pundits have discredited this method of cooling as wasteful, including Schneider Electric’s territory manager for the Federal government and the ACT, Olaf Moon.

[…]

Cappuccio notes that engineering firms that are consulted to build data centers know about the newer and more efficient ways to do things. But rather than try something new, they prefer the stock standard cookie-cutter approach to creating data centers because it’s fast and easy, he said.

[…]

“I’ve seen a lot of data centers being built that are too big,” says Cappuccio. “We’re finding people with data centers that are three to four years old when they realise they have far too much space, and are still providing air conditioning to those areas. So they begin to shrink them, putting up walls, bringing down the ceiling so they don’t air condition the extra space.”

See on www.zdnet.com