Throttler

Members
  • Content Count

    3
  • Joined

  • Last visited

About Throttler

  • Rank
    Newbie
  • Birthday 01/01/1970

Profile Information

  • Interests
    SUPERANTISPYWARE
  1. Yes Nick, I'm sorry to have lengthened this debate, but the way it veered off topic was quite amazing. Anyway, I'll try to restrain myself from now on, and I hope the others do as well.
  2. It's fictitious statements like this that clearly expose how Symantec has been able to dominate the commercial AV market for the last couple of decades. It's no wonder that small and large businesses that use their products continue to suffer as a result of their marketeering-vs-engineering strategy. While I do not have CONCRETE proof that the Symantec engine has a very strong unpack engine, there is more than enough proof that Symantec has unparalleled polymorphic virus detection. Read below:

     http://www.wilderssecurity.com/showpost ... stcount=15
     http://www.wilderssecurity.com/showpost ... stcount=29

     Happy Bytes is Michael St.Neitzel; he used to work at Eset and now works for FRISK/F-Prot. I will quote him here:

     "For the polymorphic test, my personal guess is that Symantec will be very good, due to the fact that Peter Ferrie and Peter Szor work there."

     "Peter Ferrie and Peter Szor have been known in the AV industry for years, so it's no surprise that I picked out these names from Symantec. Both specialize in parasitic file infectors, which doesn't mean that this is all they do, but it's difficult these days to find experienced people for this purpose. Running such files on an automatic replication system is a task anyone can do, but analysing them properly with a disassembler and finding the tricky parts becomes a bit more difficult.

     Homepage of Peter Szor: http://www.peterszor.com
     Homepage of Peter Ferrie: http://pferrie.tripod.com

     I just wanted to explain with this example that it wouldn't surprise me if Symantec is the only company in these tests which ALONE scores 100% in the polymorphic tests."

     This was back in 2006, before AV-Comparatives' February 2006 comparative had been published. And indeed, when the test was finally published, Symantec was the ONLY AV to detect 100% of polymorphic viruses. Of course, the other AVs have improved since then, but even in the latest AV-Comparatives test very few AVs score 100% on polymorphics, and Symantec is one of them.

     Symantec received praise from an employee of another vendor; that in itself says a lot, you know. Better start owning up to the fact that Symantec is not composed of jackasses. Ask anyone in the industry: Symantec has very talented virus analysts on its team, and they are definitely starting to clean up their act as of late.
  3. It's amazing how this thread degenerated from a simple request to change something in SAS into a Norton-bashing thread. Really... Those of you still bashing Norton, go ahead and read the thread below:

     http://www.wilderssecurity.com/showthread.php?t=162429

     Norton still has its faults, but it is MUCH improved over all previous versions. It is not so much of a PC killer at all anymore! It is fast, has SONAR (which is pretty good), and its detection rates are not bad at all either. Also see below:

     http://www.pcworld.com/product/testrepo ... did=29902#

     PC World says BitDefender is the heaviest. Taken overall, NAV may still come across as quite heavy, but since the resource test was performed at DEFAULT settings, and default settings vary across AVs, one can't be too sure. That being said, NAV and NIS 2007 are very good products, and obviously Symantec is finally doing something right. Removing NIS to accommodate SAS is not a solution; NIS is a good product with VERY good detection rates, and two of the industry's most well-respected testing websites prove this.

     Nossirah, I believe winnow.oitc.com is not as reliable a test as you think. How does it guarantee in any way that the samples are indeed malware? They could very well be corrupted files, considering that a "less than 50%" threshold means even 1 or 2 AVs' detections are counted into the study. The study also includes lots of RISKWARE-type applications, and EVERY AV has a different definition of riskware: what Kaspersky or Avira may call riskware may not apply to AVG, for example. Maybe the results are not "fake", but they are definitely NOT accurate, due to corrupted files, riskware, etc. I occasionally receive samples from the VX community, and quite often a few of them are harmless or corrupt samples which other AVs detect for deliberate or inadvertent (non-deliberate/technical) reasons. And if you want to know how I know they are harmless/corrupt, the answer is that an AV vendor tells me. Naming exactly who is out of the question.

     So you're saying AV-test.org and Andreas Marx are a POS? Tell that to the AV industry. Have you ever considered that VirusTotal sometimes misses samples that the same AV installed on your computer would detect? There are many reasons for this: implementation, updating, and others as well. Also, since NAV does not update as frequently as KAV, for example, it could simply be that the signatures are added too late to catch the threat; by the time a user actually encounters it, in most cases NAV would already have updated. Nobody ever denied that, not even AV-test.org.

     Do you really think detection rates are the one and only priority when choosing an AV? If your AV detects 99% of malware but does not have support worth a damn, would you still use it? I'm not bashing AntiVir here, but choosing an AV depends on a lot of factors: cost, speed, resource usage, even the GUI, support, virus submission and analysis service, features, functionality, compatibility, and a lot more. A lot of people chase detection rates like they're the saviour of the world. AntiVir's forum-based support is not the best; it's decent, even good, but not the best. If your 99%-detection AV misses just one virus and your AV vendor does not wish to support you and add detection, then your entire 99% detection rate has essentially gone to hell... Sure, AntiVir is a good product, and I like it myself, but choosing an AV purely on detection rates is like giving the class bully preferential treatment because he is good at academics...
     I used to run KAV-based AVs and then switched to BitDefender, and in my sample sets (I have hundreds of samples), I see AVG (Internet Security, paid edition) detecting more than BitDefender... But that doesn't mean anything, nor will it ever mean anything. What matters (to ME) is the support and virus analysis service, which both AVG and BitDefender are pretty good at. Maybe this was true in the past, but today it is an AV with very strong unpack support and unparalleled polymorphic virus detection in the industry...

     Agreed. Believe it or not, whether you use Avira/Kaspersky/whatever or not, there are THOUSANDS of malware samples out there that EVERY AV misses, and when one particular AV has a higher market share, its users are hit harder by undetected samples, and hence the "NAV sucks" stereotype is formed...

     I cannot and will not agree with that statement with regard to NOD32. Eset always adds samples from well-known organizations, but NEVER from its PAYING USERS. Go ahead, buy a license of NOD32, send them a sample from a personal email ID (rename yourself if necessary) and see what happens. Eset has even admitted this: they are mostly concerned with adding samples from sources like AV-test.org, AV-Comparatives and MIRT. So if a user gets infected tomorrow, Eset is not going to help worth a damn unless you make a hue and cry on the official forum. Look through the official NOD32 forums; MANY people have complained about this.

     Cherry-picked malware? From where? Even if you collect samples from each and every AV vendor and pit all the AVs against them, it becomes a pretty random sample set. Do you have any idea where AV-test and AV-Comparatives get their samples from, apart from AV vendors? I don't either. Judging by your statement, you are saying that VirusP's virus.gr tests are reliable and trustworthy because they are performed by an independent VXer with no affiliations whatsoever. And if you don't know, AV-test and AV-Comparatives are 100% independent and do not bias the results. In the same way, Andreas Marx and Andreas Clementi are the same type of malware hunters, except that they know how to separate harmless and corrupted crap files from real malware. Maybe their sources are different; maybe their samples come from a different part of the world. If you put it this way, you are again saying that virus.gr and malware-test.com are reliable, since malware-test uses honeypots to get its samples and VirusP is independent. Malware-test's honeypot draws on Chinese, Taiwanese and English sources, so the results are strange. At the same time, malware-test's honeypot uses a system similar to MIRT's, and they have not sorted out the corrupted/garbage/harmless files...

     Okay, so by your analogy, NOD32 getting 49% is reliable, and Kaspersky scoring less than AVG in a malware test is reliable... The fact is, just because YOUR sample set doesn't show the same results as others', that doesn't mean the OTHER tests are faulty compared to yours. There are many Chinese AV tests out there which show NOD32 and Kaspersky to be very bad compared to others (malware-test is an example), and their sources of malware are all different. Basically, every testing organization is doing the same thing as MIRT, but due to regional differences you will see differences in detection rates. At the same time, most of these "tests" contain many corrupted samples, which alter the test results heavily.
     Let's face it, neither you nor I have the resources or tools required to sort crap/garbage files from real malware, so in most cases we have to rely on what the AVs tell us. Try to be more open-minded and think about it.

     Ahem... Stefan Kurtzhals from AVIRA did not approve of the test. Do I need to remind you that the AVIRA engine is used in WebWasher, which is one of the highest-scoring AVs? Let me quote Stefan Kurtzhals here:

     "The graph is interesting, but the test samples contain lots of false positives and garbage executables. Also, the test doesn't show you the high false positive rate some of the scanners have; they are running in "paranoid" mode on VirusTotal. Also, the scan results are incorrectly rated. For example, information messages in the scan log are rated as detections."

     Maybe not AV-Comparatives, but the PC Welt tests of 2006 by AV-test.org surely used samples ONLY FROM 2006 during the testing, and Symantec still scored very well in that test...

     A thing EVERYONE should know about the OITC chart is that it was NEVER INTENDED TO LOOK AT THINGS FROM A HOME USER PERSPECTIVE. If you look closely, it heavily favours gateway-level AVs and FP giants; the difference between AntiVir and WebWasher proves this. On a gateway, paranoid-level scanning is very important, i.e. security > false positives. The risk tolerated is minimal, and hence, for the sake of potentially improved security, many AVs used on gateways are FP giants and also have stupid packer detections; WebWasher is a prime example. In many cases, these cause lots and lots of FPs. Heuristics-based and packer-based FPs apply to many AVs, including VirusBuster, Sophos, Fortinet, eSafe, VBA32, etc. Why? Because this is paranoid protection. Even if only one scanner picks something up, it is counted into the study, so there are potentially lots and lots of FPs in the OITC study. But this won't matter for gateways, because gateways need paranoid protection. AVs used on gateways also detect lots of riskware with trigger-happy detections, while AVs popular in the home-user market will probably ignore this riskware, because from a home-user perspective many such tools are pretty harmless. So as you can see, this is not a black-and-white thing. In short, the OITC results are meant to gauge the zero-day protection ability of various AVs at the GATEWAY level; due to the different requirements and characteristics of the home-user level, these results do not apply to home users.

     I hope my post wasn't too offensive; I realize I may have behaved rudely in a few lines, and I'm sorry for that. I do appreciate the good work MIRT is doing, but it would be wrong to call OITC suitable for home users. Even the maintainer of the OITC results will tell you the same thing: they were never interested in looking at things from a home user perspective.