Revive is a project I started about nine months ago; I recently found time to put some finishing touches on it and open source it. It's not a linter aggregator; it's a framework that provides tools to reduce the friction of developing custom rules for static analysis of your applications.
And no, the individual rules in revive do not parse the files. There's a higher-level abstraction that parses the files once. Each rule may request type information for the package, which is then cached and reused across invocations as well. That's how revive manages to improve on golint's performance to this extent.
I wonder where you draw the line between a "linter aggregator" and a "linter". golangci-lint incorporates all the rules itself, though it imports the linter logic as libraries, so I'm not sure it's fair to call it an aggregator. Gometalinter runs linters as child processes and I don't think it contains any linter code, so it's a pure aggregator.
My point is that while your project is admirable, the Go world isn't large enough for so many linter projects. Personally, I just want one good linter that is maintained and that incorporates all the rules I want.
One reason golangci-lint is faster is that it shares the same in-memory representation of the linted packages among all the linters. From what I can tell, Revive requires that each rule parse the file itself, which is the same design gometalinter uses, and it will absolutely kill performance for anything that needs to operate on the full AST.
Also, the Go community does not need a bunch of competing linter tools. For one, it means that tools like Visual Studio Code's Go plugin [2] will have to build special support for each linter tool. Gometalinter was nice because it built lots of rules (including golint) into a single tool, so you just needed that one tool.
I agree with the others about configuration. I think a lint tool should have a configuration that works out of the box and reflects the Go conventions that everyone, without question, follows. More controversial conventions, such as requiring comments on all exported functions, don't need to be included by default.
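For what it's worth, revive's config is a TOML file with one table per enabled rule; the snippet below is illustrative only (the `[rule.name]` convention is documented, but treat the specific rule names and keys as assumptions):

```toml
# Illustrative revive config: sensible defaults enabled,
# the more controversial rule left out.
confidence = 0.8
severity = "warning"

[rule.blank-imports]
[rule.error-return]

# Deliberately not enabled: the "comments on all exported
# identifiers" rule that not everyone agrees with.
# [rule.exported]
```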
Anecdotally, my company has used Ruby since around 2004, and Bundler since its first release, and we never had any issues. That doesn't mean nobody has ever had any issues (clearly! [2]), but it generally seems like Ruby package management is a solved problem, and that it would be a good model for any dependency system to use.
Bundler does have one feature (or misfeature) that Russ Cox criticizes: "bundle install some_gem" can cause unrelated gems' minor (or patch) versions to be upgraded even if you don't tell it to. I've never liked that, and would much prefer to use "bundle update" to perform explicit upgrades. But I don't think that behaviour is at all tied to its solver, or that MVS is needed to fix it.
The top 3 items there are slow because they're:
1. 'source-exists' (~6s), which does network traffic to find whether a project exists to be downloaded or is already in the cache; it's network-I/O heavy in most cases.
2. 'list-packages' (~3s), which parses the downloaded source code for import statements to find further dependencies; disk-I/O heavy, plus the go loader has to do some work.
3. 'gmal', GetManifestAndLock (~2s), which looks for lock files, including those of other dependency solvers; mostly disk I/O, I think.
Any system designed with the constraints that it cannot use a centralized registry/list, that it must be compatible with things not using the system (and so must parse their code), etc., will have these problems regardless of the algorithm.
Those steps are all doing network/disk-io/go-parsing, and none of that is SAT solving.
I don't think vgo has these problems because vgo is built by the go team and can dictate far more, such as the use of a centralized repo, that all dependencies must use vgo, etc.
The fact that dep parses import statements (as does Glide) is something I've never liked. It means that if you run "dep ensure --add" on something not yet imported, it will complain, and the next "ensure" will remove it. This is never in line with how I actually work. I need the dependencies before I can import them! There's no editor/IDE in existence that lets you autocomplete libraries that haven't been installed yet.
It also means that "dep ensure" parses my code to discover things not yet added to Gopkg.toml. That's upside down to me. I want it to parse its lockfile and nothing else; the lockfile is what should inform its decisions about what to install so that my code works, my code shouldn't be driving the lockfile! If I try to compile my code and it imports stuff that isn't in the lockfile, it should fail, and dep shouldn't try to "repair" itself with stuff that I didn't list as an explicit dependency.
I'm sure there are edge cases where the current behaviour can be considered rational, but I don't know what they are. As you point out, dep has to do a lot of work -- but why? Running "dep ensure" when the vendor directory is in perfect sync with the lockfile should take no time at all, and certainly shouldn't need to access the network. Yet it takes the same amount of time with or without a lockfile.
That is, in more proper words: first of all, language-specific package management is mostly a solved problem. There are possible improvements, and maybe vgo realises some of them, but that's mostly a bikeshedding problem. What users need is to be able to declare what packages they need, in what version range. And their search for an alternative to fetching source repos is like searching for a cure to illnesses that already have proven vaccines: you just put up a server and fetch from there. Decentralisation? Put up mirrors.
Then, the way this vgo thing happened is the opposite of nice. Tools already existed, and they had to conform to the restrictions of the project (like the, excuse me, idiotic idea of a $GOPATH); but then one of the Go deities comes around and goes: um, I deprecate all of you, I break the rules you had to comply with, and because I-am-who-I-am, this is the way to go.
Now Cox's solution might indeed be better (though I think it's overkill, and I do agree with the Boyer articles I've read), but this is not the way to run a community. From my PoV, this would not preclude me from using the language if it came up, but I'd definitely be reluctant to send patches upstream. Communities with deities and dogmas are always unhealthy. Those that are also deep down in yak shaving and bikeshedding are even more so.
vgo is actually much, much simpler than dep. The sheer number of words in Russ Cox's series of blog posts belies its simplicity. vgo doesn't need a SAT solver. If you look at many of the issues dep is struggling with, they're related to solving N libraries with transitive dependencies up the wazoo.
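To make the "no SAT solver" point concrete, here is a toy sketch of Minimal Version Selection, the algorithm vgo uses: for each module, take the greatest of the minimum versions anyone in the requirement graph asks for. Module names, versions, and the string comparison standing in for real semver ordering are all simplifications; real MVS also has to handle cycles, which this ignores:

```go
// Toy Minimal Version Selection: the chosen version of each module is
// simply the maximum of the minimum versions required anywhere in the
// graph reachable from the root. No constraint solving needed.
package main

import "fmt"

type req struct{ module, version string }

// requirements maps "module@version" to that version's own
// requirements (hypothetical data).
var requirements = map[string][]req{
	"A@1":   {{"B", "1.2"}, {"C", "1.1"}},
	"B@1.2": {{"C", "1.3"}},
	"C@1.1": {},
	"C@1.3": {},
}

// mvs walks the requirement graph from root and keeps, per module,
// the highest minimum version seen. (Plain string comparison stands
// in for semver comparison; cycles are ignored.)
func mvs(root string) map[string]string {
	chosen := map[string]string{}
	var visit func(key string)
	visit = func(key string) {
		for _, r := range requirements[key] {
			if r.version > chosen[r.module] {
				chosen[r.module] = r.version
			}
			visit(r.module + "@" + r.version)
		}
	}
	visit(root)
	return chosen
}

func main() {
	// C ends up at 1.3 because B's minimum is higher than A's.
	fmt.Println(mvs("A@1"))
}
```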
Cox's long treatise reflects the complexity of the problem space. Developers tend to brush off package management as being simple. But once you include range-based version constraints and transitive dependencies, it gets a bit messier. Look at NPM and Yarn; they're still struggling to get all the details right. On the other hand, there's Ruby's Bundler. It came out in 2009, RubyGems in 2004, and I've never had a single issue with the toolchain (other than messing up my own constraints). I don't know what kind of magic elixir they were drinking, but somehow those guys managed to nail it from day one.
So how do you find bugs? You run tests.
The same can be true if you're working with package management. Most bugs aren't found by the dependency tracking system anyway. Having good tests will tell you about incompatibilities that upstream didn't even know about.
If you're not using exactly the same versions that the maintainer used, you need to run tests.
Maybe people currently don't run tests often enough? But this suggests a different approach to software robustness than comparing version numbers.
Historically, most languages (C, C++, pre-Maven Java) haven't had package management at all, and so dependencies have typically been managed by vendoring the code (or JAR files). JAR files worked okay, but vendoring incurs maintenance overhead that isn't acceptable in today's environment. git submodules are theoretically a solution, but also high-maintenance.
I have to admit I'm a bit confused as to why the dependency resolution algorithm in dep is seen as slow. The speed of the solver is not a problem in any other package management system I've seen. If it is indeed the solver that is the problem (which, again, I'm skeptical of—I'd have to see profiling data to believe it), then it could just come down to optimization differences between rustc and Go 6g/8g.
[1] https://gist.github.com/atombender/7c28f1d371fcb139e1e742a08...
If you don't know what I'm talking about: vgo requires releases to be tagged like "v1.0.3", which is not standard semver (the leading "v" isn't part of the spec).
https://github.com/gogo/protobuf
https://github.com/olivere/elastic
It might seem strange that they lead with this new feature, but `yield_self` can greatly improve Ruby method chains, and `then` makes it more accessible.
The style of writing a Ruby method as series of chained statements has a non-trivial effect on readability and conciseness. `then` lets you stick an arbitrary function anywhere in the chain. They become more flexible and composable, with less need to interrupt them with intermediate variables that break the flow.
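As a sketch (the pipeline and data below are made up; `then` is the Ruby 2.6 alias of `yield_self`), the difference in flow looks like this:

```ruby
require 'json'

# Without `then`: intermediate variables interrupt the pipeline.
raw    = '{"price": 100}'
parsed = JSON.parse(raw)
taxed  = parsed["price"] * 1.2

# With `then`: any function slots straight into the chain.
taxed_chained = '{"price": 100}'
  .then { |s| JSON.parse(s) }
  .then { |h| h["price"] * 1.2 }

# Both compute the same value (120.0), but the chained form reads
# top to bottom with no named intermediates.
```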
I've been using `ergo` from Ruby Facets for years ... largely the same thing ... and the more I use it, the less readable my old code looks to me now. Funny how adding one very simple method can have more effect than so many other complex, high-effort changes.