Fake Tech Support Scams

Today I received a call from a friend telling me that malware had been loaded onto their computer. Every time they used their browser, regardless of which browser it was, they were getting a nag screen claiming that Microsoft had detected malware on the computer and that they should contact Microsoft to have the problem resolved.

The quick solution is to run a malware detection and removal tool; there are a few free ones available online.

If that doesn’t clean your computer, you may have to take it to a technician to have the malware removed. Depending on the complexity of the malware, that can be an expensive venture, with no guarantee that everything has been removed.

Manually removing viruses, rootkits, malware and trojans can be very time-consuming; I have known people to run up bills of over three hundred dollars.

So the cheaper solution is to reinstall the operating system and then reinstall all your software (a pain in the ass).

It is always good practice to back up your data files, for two reasons: first, in the event that you have to reinstall your operating system, and second, in case of hard drive failure.
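A backup can be as simple as copying your data folder to a second drive. Here is a minimal Python sketch of that idea (the paths in the example comment are hypothetical, not from this article):

```python
import shutil
from datetime import datetime
from pathlib import Path

def backup_folder(source: str, dest_root: str) -> Path:
    """Copy a folder of data files into a new timestamped backup directory."""
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    dest = Path(dest_root) / f"backup-{stamp}"
    shutil.copytree(source, dest)  # copies the whole directory tree
    return dest

# Example (illustrative paths only):
# backup_folder("C:/Users/me/Documents", "E:/backups")
```

Run it on a schedule, or by hand before any reinstall, and both failure scenarios above are covered.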

Below is a link that shows what this fake Microsoft popup window looks like.

(A dead giveaway: if you are using Linux or macOS and you get a message from “Microsoft”, you are being scammed … lol)

The other version of this fake tech support scam is the telephone call: someone phones you, tells you that your computer is infected, and offers to fix the problem. (Yeah, right!)

For the Microsoft tech support scam, here are two images that you might recognise if you have encountered it.

Microsoft fake tech support 01

You may then choose to click OK, and the next pop-up immediately appears.

Microsoft Tech Support Scam popup number two

If you want to know more about these tech support scams, check out the following video.

Part II: Eliminate the Fear of Browser Cookies, Install TOR

By André Faust

One of the biggest sources of concern online comes from not knowing how browsing and the internet actually work.

There is a lot of bad information in circulation about cookies and the browser’s cache. Unfortunately, misinformation of this kind leads to anxiety about cookies and other items downloaded into your computer’s cache.

With the early browsers, underhanded activities were common, not with cookies but with JavaScript. Today’s modern browsers are programmed to recognize malicious JavaScript and won’t allow the script to run or be downloaded. So what is loaded in the cache is really the least of a user’s worries.

Anything that is loaded into the cache, whether cookies or other files, is loaded to increase the efficiency of the browser. Disabling the cache will hurt your browser’s performance.

Malware, viruses and other unwanted software are usually bundled with other software that you do consent to install.

It is common for software, add-ons, and the like that are offered for free to come with other software bundled in. Even a reputable corporation like Adobe does this: when you install their Flash plugin, if you don’t uncheck the option to install Norton Security, the installer will install Norton Security as well.

TOR is a browser designed to help you browse anonymously, provided you use TOR and follow TOR’s instructions.

Part I, Anonymous Browsing: The Experiment, discusses in depth what information your computer gives out. In addition, Part I discusses the strengths and weaknesses of TOR and of VPNs (Virtual Private Networks).

The installation is pretty straightforward on whatever platform you are using: Windows, Linux or Mac. It is just a matter of going to their site and following the installation instructions.


Part I Anonymous Browsing: The Experiment

By André Faust

There are many stories about the information you give out when you are online; some of the stories are accurate and others are exaggerations.

To separate fact from fiction, I created a web page located on my site to collect any information from anyone landing on the page. The page does not keep any of the information that it displays. Once the user closes their browser the information is lost forever.

Using the page, three tests were performed: the first with Firefox, the second with the TOR browser, and the final test with a VPN (Virtual Private Network).

The first two tests were strictly browser tests. The VPN test was a hybrid: it tested both the browser and the information that is sent from the user’s computer when a browser is not being used.

Two computers located on the same network were used to conduct this experiment. Computer A had geolocation software installed, while computer B did not.

Most computers don’t have geolocation software or hardware installed, so for those, accurate geolocation is highly unlikely. The test shows the difference in accuracy between the two computers.

However, the tests did show that it is possible to get a person’s location under the right conditions.

What the experiment revealed is that when you are using a standard browser that supports the geolocation API (Application Programming Interface), the browser will ask the user for permission before revealing a location.

To avoid any confusion: there is a difference between geolocation software and hardware on the one hand and a browser’s geolocation API on the other. The software and hardware can give your exact location whether or not your computer is communicating with the internet through a browser, whereas the browser’s geolocation API only comes into play when a website requests your geolocation.

Not all browsers have the geolocation API enabled; in this test, Chrome behaved as a browser without it. When a browser does not support the geolocation API, the user’s location is not given.

The result of the experiment is that you can browse with anonymity with TOR. With Firefox, your IP address is given; the IP address given is the one that your internet service provider assigns to your router. While that IP address will identify the internet service provider and the provider’s address, it will not give your location. The only way your location could be found is if someone had the legal authority to request that information from the provider.

Outside of geolocation, the results showed that when you visit a site, you give your IP address, the browser you are using, some of the plugins you are using, your screen resolution and your operating system.
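To see how little a server has to do to collect this, here is a minimal Python sketch of a “who am I” style page (an illustration only, not the actual page used in the experiment): it simply echoes back the visitor’s IP address and User-Agent header, which between them reveal the provider-assigned IP, the browser, and usually the operating system.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class WhoAmIHandler(BaseHTTPRequestHandler):
    """Echo back the information every plain HTTP request reveals."""
    def do_GET(self):
        body = (
            f"Your IP address: {self.client_address[0]}\n"
            f"Your browser (User-Agent): {self.headers.get('User-Agent', 'unknown')}\n"
        )
        data = body.encode()
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    def log_message(self, *args):  # keep the demo quiet
        pass

# To run the demo server locally:
# HTTPServer(("127.0.0.1", 8000), WhoAmIHandler).serve_forever()
```

Note that the server does not have to ask for any of this; the browser volunteers it with every request.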

If the site requests your geolocation, the browser will ask the user for permission to proceed; if the user declines, no information is broadcast.

The VPN test was successful at spoofing the IP address but failed to mask any of the other information: browser brand, plugins, screen resolution and operating system. Using a VPN to spoof the IP address is independent of the browser.

Short of someone performing statistical analysis on those details, one can browse without giving out information that can identify the user.

Most of the personal information a user gives out comes from filling in online forms. That is where the majority of personal information is given, so you really have to trust that the site will not sell or give that information out.

The focus here is on information that is broadcast while online. Cookies and the browser cache were not looked at as part of the experiment; that is another topic. A quick word on cookies, though: most of the cookies downloaded to your browser’s cache are helpful to the user, and when cookies are disabled the user loses some of those advantages.

The page that was used for this video can be found at http://jafaust.com/whoami/. It is a good tool because it does all three things: it finds the user’s IP address, locates the internet service provider, and runs a whois to identify the internet service provider and its location.

Gopher Predates the Internet as We Know It, But It’s Still Useful

By André Faust

Those who have been around since the beginning of the internet, the early ’90s, will remember Veronica and Archie. Veronica was the search engine for the Gopher protocol, and Archie was the search engine for FTP (File Transfer Protocol). In their early days, Archie, Veronica and Gopher were available on university campuses to allow access to research.

Most of the information you would retrieve was text-based documents, over a modem running anywhere between 300 and 2400 baud.

While Gopher has taken a back seat to the modern-day internet, it can still have its place for activists, or for groups who only wish to disseminate information within the group. The only catch is that you have to set up your computer as a Gopher server as well as a Gopher client.

The advantage of using Gopher is that it is very lightweight, and because most of the information is text-based it does not consume as much bandwidth as modern HTML (Hypertext Markup Language) web pages. That is great if you are on a mobile connection where the provider nickels and dimes you once you go over your allocated bandwidth.
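To illustrate just how lightweight the protocol is, here is a minimal Python sketch of a Gopher client (the host and selector in the example comment are hypothetical): a request is nothing more than a TCP connection to port 70, the selector string, and a CRLF, and a menu is plain tab-separated text.

```python
import socket

def gopher_fetch(host: str, selector: str = "", port: int = 70) -> bytes:
    """Fetch a Gopher resource: connect, send selector + CRLF, read to EOF."""
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(selector.encode() + b"\r\n")
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

def parse_menu_line(line: str) -> dict:
    """Split one Gopher menu line: a type character, then tab-separated fields."""
    item_type, rest = line[0], line[1:]
    display, selector, host, port = rest.split("\t")[:4]
    return {"type": item_type, "display": display,
            "selector": selector, "host": host, "port": int(port)}

# Example menu line (type '0' = text file), with a made-up host:
# parse_menu_line("0About this server\t/about.txt\texample.org\t70")
```

Compare that to the headers, TLS handshakes and script payloads of a modern web request, and the bandwidth argument makes itself.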

Cameron Kaiser from the Overbite project explains why Gopher is relevant even in today’s world.

Why is Gopher Still Relevant?

Cameron Kaiser, from the Overbite Project

Most people who “get” Gopher are already using it and instinctively understand why Gopher is still useful and handy. On the other hand, people who inhabit the Web generation after Gopher’s decline only see Gopherspace as a prototype Web or a historical curiosity, not a world in its own right — and more to the point, being only such a “prototype,” there is the wide belief that Gopher plays no relevant role in today’s Internet and is therefore unnecessary. This has led to many regrettable consequences, such as the neglect of servers and clients, or even active removal of support.

However, there is much to be gained from a heterogeneous network environment where there are multiple methods of information access, and while the Web will confidently remain the primary means of Internet information dissemination, there continues to be a role for Gopher-based resources even in this modern age. Gopher and the Web can, and should, continue to coexist.

The misconception that the modern renaissance of Gopherspace is simply a reaction to “Web overload” is unfortunately often repeated and, while superficially true, demonstrates a distinct lack of insight. From a purely interface perspective, there is no question that Gopher could be entirely “subsumed” under the Web (technical differences to be discussed presently). Very simple HTML menus and careful attention to hierarchy would yield an experience very much like a Gopher menu, and some have done exactly that as a deliberate protest against the sensory overload of modern Web 2.0.

Gopher, however, is more than a confederated affiliation of networks with goals of minimalism; rather, Gopher is a mind-set on making structure out of chaos. On the Web, even if such a group of confederated webmasters existed, it requires their active and willful participation to maintain such a hierarchical style and the seamlessness of that joint interface breaks down abruptly as soon as one leaves for another page. Within Gopherspace, all Gophers work the same way and all Gophers organize themselves around similar menus and interface conceits. It is not only easy and fast to create gopher content in this structured and organized way, it is mandatory by its nature. Resulting from this mandate is the ability for users to navigate every Gopher installation in the same way they navigated the one they came from, and the next one they will go to. Just like it had been envisioned by its creators, Gopher takes the strict hierarchical nature of a file tree or FTP and turns it into a friendlier format that still gives the fast and predictable responses that they would get by simply browsing their hard drive. As an important consequence, by divorcing interface from information, Gopher sites stand and shine on the strength of their content and not the glitz of their bling.

Furthermore, Gopher represents the ability to bring an interconnected browsing experience to low-computing-power environments. Rather than the expense of large hosting power and bandwidth, Gopher uses an inexpensive protocol to serve and a trivial menuing format to parse, making it cost-effective for both client and server. Gopher sites can be hosted and downloaded effectively on bandwidth-constrained networks such as dialup and even low-speed wireless, and clients require little more than a TCP stack and minimal client software to navigate them. In an environment where there are cries for “green computing” and “green data centres,” along with large-scale media attention on emerging technology markets in developing nations and the proliferation of wireless technology with limited CPU and memory, it is hypocritical to this author why an established protocol such as Gopher would be bypassed for continued reliance on inefficient programming paradigms and expensive protocols. Indeed, this sort of network doublethink has wrought large, unwieldy solutions such as WAP, a dramatic irony, since in the case of many low-power devices such as consumer mobile phones, the menu format used on them is nearly completely analogous to what Gopher already offered over a decade earlier. More to the point, few in that market segment support the breadth of WAP, and those that can simply use a regular Web browser instead.

Finally, if Web and gopher can coexist in the client’s purview, they can also exist in the server’s. HTML can be served by both gopher servers and web servers, or a Gopher menu can be clothed in CSS, translated to HTML, and given to a web browser (and in its native form to a Gopher client). This approach yields a natural and highly elegant consequence: if you don’t want to choose strictly one way or the other to communicate to your users, choose neither and offer them both a structured low-bandwidth approach or a higher-bandwidth Web view, built from the same content. The precedent of a single serving solution offering both to both clients has been in existence since the early days of the Web with tools such as GN, and today with more modern implementations such as pygopherd. Gopher menus are so trivial to parse that they can easily be HTML-ified with simple scripts and act as the basis for both morphs; what’s more, their data-oriented approach means they require little work to construct and maintain, and content creation in general becomes simple and quick with the interface step already taken care of. Plus, many servers easily generate dynamic gopher menus with built-in executable support, providing the interactive nature demanded by many modern applications while still fitting into Gopher’s hierarchical format, and virtually all modern Gopher servers can aggregate links to Web content to forge bidirectional connections.

Modern Gopherspace represents the next and greatest way for alternative information access, and the new generation of Gopher maintainers demonstrate a marked grassroots desire for a purer way to get to high-quality resources. Not simply nostalgia for the “way it used to be,” modern Gopherspace is a distinctly different population than in the mid 1990s when it flourished, yet one on which modern services can still be found, from news and weather to search engines, personal pages, “phlogs” and file archives. It would be remiss to dismissively say Gopher was killed by the Web, when in fact the Web and Gopher can live in their distinct spheres and each contribute to the other. With the modern computing emphasis on interoperability, heterogeneity and economy, Gopher continues to offer much to the modern user, as well as in terms of content, accessibility and inexpensiveness. Even now clearly as second fiddle to the World Wide Web, Gopher still remains relevant. — Cameron Kaiser