When code is developed we are (due to fundamental physical properties of the universe) specifying how we wish systems to operate in the future. This is likely to remain the case until we work out how to send code into the past (and if we could do that, why did VB.NET happen?). This means that code is fundamentally a forward looking instrument. Anything in the code that isn’t about the future behaviour of a system is noise that distracts from the intended purpose. This increases the cognitive overhead of dealing with the code and should be minimised or eliminated.
The history of the codebase is intrinsically backwards looking. It is of critical importance in maintaining systems, but it is not appropriate to track it within the code itself. It is also managed automatically by version control systems, so any inclusion in the code will be a low fidelity manual copy of information already captured by a dedicated system. This kind of history commonly appears in two forms: a change tracking comment block at the start of a file, or inline comments that indicate specific changes.
A tracking block on a file is a remnant of the dark and distant past before the availability of source control systems. It is no longer the 1970s; there is no excuse for tracking this information somewhere it is difficult to record, expensive to maintain and supplanted by other data sources. Version control can tell you who the authors of a file are, what they modified and when in a far more comprehensive fashion than a tracking block. They’re noise; kill them.
Inline change tracking comments are, if anything, worse. These put the noise right in the details of the code, where the focus should be purely on the behaviour. Further, it is never particularly clear what warrants such comments or how multiple competing changes should be documented. This information is all readily available in source control if it ever becomes relevant. Leave it there.
It should be noted that there can be valid comments that explain the current state of the code where the approach taken is informed by history. Such comments are not a change log but an explanation of why the code is the way it is. This can be a warning that certain things must be considered to avoid defects that have previously been encountered, or an indication that what appears to be a better solution has been tried and found wanting. These comments encapsulate lessons of the past relevant to the future maintenance of the code and hence are justifiable.
Commented out code is also to be strongly avoided. Such code is noise, distracting from the code that is actually performing a function. Historic code is tracked by version control, from which it may be recovered if necessary. Left in place it may falsely suggest that functionality can be recovered quickly. In many cases this is untrue, as the code will need many adjustments to adapt to modifications made to the system after it was commented out. It will also produce false positives when performing text searches on a codebase and otherwise clutter the code.
Code that is still included but never invoked is also problematic. It must still be dealt with by tooling, increasing load and compile times and adding overhead. It must be modified to adapt to system changes despite adding no actual value to the system. Additionally it can increase the attack surface of your system, particularly if it has some form of interface.
Both commented out and unused code should be removed. If it is subsequently required it can always be recovered from version control. Any costs to fit the code to the current state of the system can then be incurred only as necessary.
It would seem that adding comments to code is an almost cost free way to improve the expressiveness of code to one of its major audiences (developers). Code covered in comments might be assumed to be more readable than that with few or no comments. The truth however is more complex. Commenting is good only in moderation and is often applied as a cargo cult practice or to attempt to alleviate poor code structure. This form of commenting impairs maintainability and is therefore actively harmful.
Comments are neither free to produce nor to consume. Writing a comment that adds meaning to the code should not be trivial. If adding such meaning is trivial then you don’t need a comment, you need better code. Trivial comments are almost always indicative of a need for better naming of variables and methods or changes to the code structure. Valuable comments add context that is not readily expressible in the code itself. Inline comments in particular are almost always better replaced with a rename or extract method refactoring.
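As an illustration (the Order type and names below are invented for the example), a trivial comment on a condition is usually better replaced by an extract method refactoring whose name carries the meaning:

    using System.Collections.Generic;

    public enum OrderStatus { New, Paid, Shipped }

    public class Order
    {
        public OrderStatus Status { get; set; }
        public List<string> Items { get; } = new List<string>();
    }

    public class Shipping
    {
        public void Process(Order order)
        {
            // Previously: "// check whether the order is ready to ship" sat above
            // the raw condition. The extracted method's name now carries that meaning.
            if (CanBeShipped(order))
            {
                Ship(order);
            }
        }

        private static bool CanBeShipped(Order order)
        {
            return order.Status == OrderStatus.Paid && order.Items.Count > 0;
        }

        private void Ship(Order order)
        {
            order.Status = OrderStatus.Shipped;
        }
    }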
Once a comment is written there are costs incurred due to its presence in the codebase. I refer not to the negligible cost of storing, moving and processing such comments. Rather there is a developer overhead in reading and comprehending comments and in modifying them when the code changes. If comments repeat what the code already says you are asking developers to essentially read everything twice. Further such comments must be maintained whenever the code is modified. This is often impractical to do during iterative enhancement or correction of code, leading to an overhead required at the end of any code modification process where a developer must go back and essentially repeat themselves in another language. If this adds nothing that could not already be determined then this is wasted effort. Failure to do so however leads to discrepancies between comments and code which lead to confusion and ambiguity in future maintenance efforts.
Possibly the ultimate example of cargo cult commenting is the GhostDoc tool. This generates XML code comments by examining the code structure and converting names in an attempt to describe it. As such a tool cannot understand the operation of the code or the context in which it operates, there is no conceivable way this can add value. A comment of “Gets the Widget” on a method called GetWidget is nothing but noise, adding nothing and consuming developer time to read it. Worse, such pointless comments make it look to a casual observer as if someone has provided descriptive information. Reading a few such comments will train developers to ignore the project’s comments, which may lead to genuinely important information being overlooked. As such, use of this tool actively degrades your codebase.
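The shape of the problem looks like this (names invented for illustration):

    public class Widget { }

    public class WidgetStore
    {
        private readonly Widget _widget = new Widget();

        // A generated comment of this shape restates the method name and adds nothing:
        /// <summary>
        /// Gets the widget.
        /// </summary>
        /// <returns>The widget.</returns>
        public Widget GetWidget()
        {
            return _widget;
        }
    }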
This should not be seen to be an attack on tooling that assists in generating XML commenting in general. The flaw in GhostDoc is the attempt to replicate information in code in comment form automatically, which adds no value. Tooling that handles the structure of XML comments so developers can concentrate on writing useful comment content is a valuable feature. It’s the extra behaviour that’s harmful here.
I have in the (deep and distant) past used and recommended GhostDoc. I apologise and unreservedly withdraw that recommendation.
So where are comments of sufficient value to justify themselves? There are a few key cases:
On the public interface of a module. Consumers of the module are unlikely to want or be able to look into the implementation, so commenting that expands on the information given by the type, member and parameter names is likely to be helpful. The danger here is falling into what I call the “MSDN Trap”, where the majority of your comments just repeat what the names already tell you. You don’t need to comment everything if you have nothing interesting to add.
On private/internal members and types where there is something interesting to add. It is reasonable to assume that members that are only used within a component can be considered in context with other elements of that component. As such you do not need to document what is clearly expressed by proximate code. This still leaves context not expressed in the code. It is also worth considering the proximity of the member to its usage inside the component. Things used inside the same type or code file are less likely to require commenting than those used elsewhere within the component.
Inline comments are almost always inappropriate. If you need to explain context it should probably go in an XML code comment on the member. If you have too much context for this, your member is probably too large and should be broken up. If the comment describes the function of a statement or conditional you are generally better off extracting a method or property with a name that describes the behaviour it is encapsulating. Inline comments should be reserved for truly exceptional cases for which these mechanisms do not apply. The most recent instance I have encountered was a case using expression trees where removing the (at first glance unnecessary) “== true” comparison resulted in incorrect evaluation of the resulting expression tree. An inline comment to the effect of “this is necessary, don’t remove it” was appropriate. This is necessary for the code to be maintained properly in future but is not externally relevant and hence does not belong in an XML code comment.
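A sketch of that kind of exceptional case (the Account type and filter are invented; the original incident involved a particular LINQ provider not shown here):

    using System;
    using System.Linq.Expressions;

    public class Account
    {
        public bool IsActive { get; set; }
    }

    public static class AccountFilters
    {
        public static Expression<Func<Account, bool>> ActiveOnly()
        {
            // The "== true" looks redundant, but the downstream expression visitor
            // only handles binary comparisons; a bare member access is evaluated
            // incorrectly. This is necessary, don't remove it.
            return a => a.IsActive == true;
        }
    }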
What I should have made clear with my comments on GhostDoc is that the blame for keeping automatically generated comments rather than expanding upon them falls on the developer using the tool. You don’t get to pass responsibility because a tool created something if that something can be readily modified. Ultimately developers and organisations are responsible for the consequences of the tools they choose to use and how they make use of those tools.
That said I maintain that generating comments by converting names into phrases is a poor behaviour because it encourages bad practices. My experience (and I must admit at times my previous practice) is that developers will generally run GhostDoc on a type to get the XML comments and then not edit them. I prefer alternate tooling that makes creation of XML comments easier but which does not attempt to pre-fill as this makes it more obvious that content must be generated by a developer. Generally this also doesn’t imply that you must comment parameters or return values where such commenting adds no value.
There’s an argument that you need to be constrained in what you post online and who you associate with. My view on this is very much that expressed here in XKCD. I have no interest in being a bland corporate drone. Pretty much everyone worth talking to online has controversial opinions and it’s not necessary to agree with all of them in order to have a meaningful conversation.
I’m aware that I am extremely lucky that I’m able to communicate in a mostly unrestricted format without significant fear of legal, financial or social consequences.
Although I’m not afraid to take positions I do believe that it’s not necessarily appropriate or desirable to discuss all topics in all forums. It’s legitimate to have multiple facets to your online identity and to restrict where you have particular interactions. I’m much freer in expression of opinions on Twitter where I tend to be significantly less serious. On my blog I tend to be more serious and impersonal. I mostly discuss work topics in communications internal to my employer, partly for legal reasons but also because I don’t believe it’s generally ethical or interesting to do otherwise.
I’m not going to pretend I’m not a person with (slightly) diverse interests and opinions. I may not choose to share every aspect of my life, but those aspects I choose not to share are not going to be chosen on the basis of fear of consequences. Any individual or organisation that chooses not to associate with me on that basis is not one I want to associate with anyway.
When building a .NET application that needs to talk to a SQL database it’s typical to use an Object Relational Mapper. However for applications of significant scale I’m yet to find an ORM that fits well with every part of the system. Modern ORMs provide a lot of sophisticated capabilities but can have higher implementation costs than are warranted for many of the application’s data access requirements. Simpler frameworks such as micro-ORMs have much lower overheads but correspondingly fewer features. Choosing one inevitably means a poor fit in some area.
This makes the assumption that we must make a single data access choice, and that is not necessarily so. You don’t need to apply full formal Command Query Responsibility Segregation in order to differentiate between query and update operations in your system. By making this distinction you open up the possibility of making different data access choices in different parts of your system.
One approach is to use a full traditional ORM such as NHibernate or Entity Framework to handle the application of business logic (the command side) and a micro-ORM to handle the query side. For straight data access the power and breadth of SQL is often overlooked by developers who prefer code solutions such as LINQ. I’m a huge fan of LINQ, but sometimes it is simpler and more efficient to go straight to SQL. A thin query layer to support the primary data access of your application may be such a case. This reduces the number of layers in your application and makes this behaviour simpler and easier to understand. At the same time the business logic can continue to use a full ORM and take advantage of the support it provides for this kind of use.
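A sketch of such a thin query layer, assuming Dapper and SQL Server (the table and class names are invented for illustration):

    using System.Collections.Generic;
    using System.Data.SqlClient;
    using Dapper;

    public class OrderSummary
    {
        public int OrderId { get; set; }
        public string CustomerName { get; set; }
        public decimal Total { get; set; }
    }

    public class OrderQueries
    {
        private readonly string _connectionString;

        public OrderQueries(string connectionString)
        {
            _connectionString = connectionString;
        }

        // Query side: plain SQL mapped straight onto a result object, no mapping layer.
        public IEnumerable<OrderSummary> TopOrdersForCustomer(int customerId)
        {
            using (var connection = new SqlConnection(_connectionString))
            {
                return connection.Query<OrderSummary>(
                    @"SELECT TOP 10 o.OrderId, c.Name AS CustomerName, o.Total
                      FROM Orders o
                      JOIN Customers c ON c.CustomerId = o.CustomerId
                      WHERE o.CustomerId = @customerId
                      ORDER BY o.Total DESC",
                    new { customerId });
            }
        }
    }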
There are a few potential objections to this approach:
It requires knowing more than one framework. If you seriously consider knowing a lightweight micro-ORM in addition to your traditional ORM to be an excessive burden then might I suggest software development is not for you. Being concerned that you have too many dependencies is a valid point and you should not add new frameworks without proper consideration. This must be balanced against the cost of pushing everything down a single path. Use of a suitably constrained set of dependencies is likely to be simpler than pushing a single framework to do widely disparate things, an approach likely to require extensive knowledge of the minutia and extension points of the framework.
You are tying yourself to a single DBMS. Many micro-ORMs deal in SQL (e.g. Dapper), which means you are likely to be writing SQL for a specific DBMS rather than taking advantage of the abstraction a framework such as NHibernate provides over your queries. This is balanced by a couple of points. Firstly, you are highly unlikely to actually replace your DBMS, and if you do the cost will already be significant. In practice most line of business applications are targeted at a single DBMS and it is legitimate to take advantage of this if the business is made suitably aware of the benefits and costs of doing so. Secondly, micro-ORMs lack the overhead of translating an alternate representation into SQL and hence for performance critical applications may strike a better balance between development and runtime costs. (This assumes that efficient SQL can be written manually; ORMs are getting very good at generating efficient SQL, which may affect the performance gained for the development effort expended.)
You can take this further and have different data storage technologies for different uses. Options include document databases, key value stores, graph databases and many others. This is outside the scope of this post but it is always worth considering if you can obtain sufficient value to justify the learning, development and operational costs of such an approach.
When addressing a problem it is not unusual to refer to “the system” when discussing the intended solution. The problem with this is that it is inherently limited to the assumption that there is a single system that will effectively address the entire problem domain. Except for very small systems this assumption likely does not hold. It is possible to build a single system that can handle all the requirements of complex problem domains, but it’s unlikely that such a system will handle all the requirements effectively.
What this tends to lead to is that for a significant percentage of the requirements the nature of the system is imposing unnecessary development burdens. In monolithic “big ball of mud” systems all the requirements are intermixed. This tends to lead to having to touch many parts of the code to implement individual requirements. The corollary is that touching any part of the code is likely to affect many requirements.
Unfortunately applying and enforcing an architecture with clear separation of concerns may not be enough. Some elements of the system will need sophisticated infrastructure support, so your “good” architecture will necessarily involve a certain level of overhead to utilise. Other elements will not need or warrant extensive infrastructure. If you lock your system into one architecture you are mandating that every element of the system have an overhead determined by the complexity of the infrastructure needed by the most demanding component.
The obvious response here is that you don’t have to apply the same architecture to every element of the system. You can choose a range of styles to suit each part of the problem domain, from simple CRUD to the most buzzword compliant SOA CQRS Event Sourcing distributed magic that you can apply. The design challenge is then to determine which styles best suit the various elements of your problem and to select a manageable set of architectural styles with which to implement the system.
If the system is of any significant scale you will then likely run into further issues. There are many instances where we will wish to manage the development and deployment of elements of the system independently. If everything is in one piece we must take care how we manage this to mitigate the risk that changes will be applied in an incomplete or uncontrolled fashion. At this point we can stop building a single system and build a set of related systems that cooperate to fulfil the business requirements. Each system can be developed in relative independence provided that the points of interaction are clearly defined and relatively stable.
Building a set of systems also has the advantage that it makes the responsibilities of each system significantly simpler, and hence each system has a simpler domain. This means each system can be understood as a whole without having to understand the entire problem domain. It is also easier to divide development, and it reduces the likelihood that developers working in different areas will make conflicting changes.
There is an initial overhead in terms of complexity of dividing a system and for smaller problems it is generally not warranted. There is a crossover point where the lower complexity of each component in a separated system outweighs the additional complexity this approach has over a monolithic system. It is therefore legitimate to consider whether this approach is necessary for your problem.
What is more clear cut is that having flexibility in architectural style within a system is beneficial. The overhead of pushing a one size fits all approach is significant. The additional complexity of supporting a limited set of approaches within a single system is minimal in comparison. It is not unreasonable to expect that a professional software developer can deal with using more than one approach within a system.
Abstractions are extremely useful things. A well designed abstraction can completely change the ease with which a system can be developed. Unfortunately abstractions are not always designed well. Many times they are missing, poorly conceived or altogether superfluous.
One particularly common failing is to layer additional abstractions on top of an existing (usually externally supplied) abstraction where the new additions are essentially duplicative or are more restrictive without being easier to work with. For some reason this seems to be particularly common with logging frameworks and Object Relational Mappers.
Generally the justifications for doing so fall into one of the following:
It hides the complexities of part of the system or an external dependency so that other code does not have to consider them.
It allows abstraction of an external dependency so that it can be replaced with an alternate implementation.
It adds functionality believed missing from the underlying abstraction.
In appropriate circumstances these are all valid reasons for an abstraction. The problem is that they are often misapplied. Adding an abstraction is not without cost, and a misapplied abstraction will end up costing more than any value it is claimed to provide.
To consider the proposed justifications in order:
Hiding unnecessary complexity is a hallmark of a good abstraction. However it should not be presumed that all complexities can be abstracted away. This is a harsh lesson that has been learnt repeatedly. Early distributed systems presumed that remote objects could be invoked in the same way as local objects, so that client code would not know where the object being used resided. Some examples went so far as to propose separate servers for each type of object, with one server providing customers while another supported orders. In practice it turns out that the complexities of remote communication are highly relevant and cannot be so easily discarded. Remote calls are orders of magnitude more expensive than local calls and subject to failure conditions not present when all the code is running locally. Abstractions that attempt to hide these complexities invariably leak, and do so in ways that are brittle and difficult to work with.
Hiding complexity will also likely hide the capabilities of the underlying abstraction. One of the biggest issues with use of the repository pattern when using an ORM is that it makes it difficult or impossible to use many of the capabilities the ORM provides. ORM abstractions provide powerful capabilities for query composition, mapping, management of lazy loading and many others. Using a repository abstraction generally hides these capabilities from the application resulting in less effective and efficient interactions. They also tend to result in complex repository implementations to handle all the data access scenarios present in an application. This mixes multiple responsibilities inappropriately and introduces the risk of breaking unrelated functionality when making a change.
Although an abstraction can be used to break coupling to an external dependency, the question must be asked whether that external dependency needs to be abstracted and whether such an abstraction is feasible. In most cases a system is built around a (hopefully well chosen) set of external dependencies that are relatively stable. Most systems have little need to swap to a new ORM, for instance. Furthermore, where an external dependency imposes constraints on code that utilises it, it is generally naive to assume that it can be replaced without significant cost. Even frameworks that ostensibly provide the same function will have different approaches and trade-offs that can have significant impacts on the operation of the system. In many cases the replacement of a key framework will not be significantly less costly than redevelopment of the system entirely. As such it doesn’t make sense to add an abstraction that can never feasibly be used.
It is also true that for an abstraction to be genuinely useful in allowing a dependency to be replaced it cannot provide functionality that a desired implementation cannot support. As such the abstraction immediately becomes a lowest common denominator amongst the various implementations available, providing only those capabilities that are provided by everything. This severely diminishes the value of using an external framework because most of its features may be denied to the system.
Adding an abstraction to provide additional functionality is not unreasonable, but the scope of the addition must be in keeping with the size of the abstraction as a whole. I have seen it proposed to layer an abstraction on top of logging systems not just to remove ties to a specific logging system but also to add a complex event type system on top of it. The problem here is that most applications do not need this complexity and the burden of managing all the possible types of events to be logged is potentially extremely expensive. Most logging systems can already sort events into namespaces, rendering an additional event system of relatively little value. Additionally, any external dependency that also provides logging will not use the custom event framework, meaning that the system will need to deal with two forms of logging for little benefit.
The relatively minor benefits of such an event type addition must also be considered relative to the need to abstract the entire logging interface. These interfaces tend to provide a large number of logging variants meaning that either the new abstraction must provide comparable functions for each at a non-trivial development cost or must provide a much more limited logging capability. In most cases the additional functionality gained will not be worth the restrictions imposed.
Adding abstractions is not itself without cost. Small abstractions are relatively low overhead, but abstracting a complex external dependency can represent a significant cost. Nor is this just a one-off build cost: the abstraction code will require ongoing maintenance effort (although ideally not to the extent of business logic and presentation code).
In most cases it is preferable that an application embraces the external dependencies it takes on and works with the abstractions they provide. This is not to say that all elements of the system must work with them directly. However, building system infrastructure upon existing abstractions rather than seeking to supplant them is generally less work, more efficient, and provides richer capabilities to the code that actually delivers the system’s value to end users.
There are some areas of software development where I hold to particular views on practices and principles. Nothing in this post is new or revolutionary; it is a statement of position, not an attempt to be novel or enlightening.
If there’s an attitude in software development worse than “It works on my machine” it’s the belief that it’s sufficient just to get software working. This view holds that features are added as expediently as possible and that modifications that are not directly related to delivering features are wasteful and unnecessary. The important thing is that the software does what is required now.
It is my view that the particular functionality of a system at any point in time is irrelevant. Software has a lifespan over which it needs to be useful. This almost certainly includes fixing defects and adapting it to changing requirements and environments. Software that cannot be effectively modified to meet these needs becomes increasingly irrelevant and unfit for purpose. In order for software to be of use now there must be confidence that work done with it will continue to give value.
Software that is not structured to suit new requirements imposes additional costs in implementing those requirements. Addressing the limitations of software structure is the essence of refactoring. This practice justifies itself by reducing the effort required to add new features or resolve defects and the risk involved in doing so.
As such there is a professional obligation to consider the long term in order to ensure that the cost of operating the software over its lifetime is not unreasonable. Software that is adaptable will also, in general, have a longer lifetime than software that cannot be adjusted and must therefore be replaced when it can no longer effectively perform the function for which it is intended.
Most non-trivial ASP.NET MVC applications have behaviours that apply to most of the controllers. Filters provide a (relatively) clean extension point where these behaviours can be added to ASP.NET MVC so that these behaviours do not need to be replicated in each controller. My preferred mechanism for doing this involves using marker interfaces and filter providers.
There are three elements to this approach:
An interface that signals that a behaviour is required. This may define properties to allow context to be added to the controller.
An action filter that provides the behaviour. It can use the interface to inject context into the controller.
A filter provider. This checks that a controller implements the relevant interface and adds the filter to the ASP.NET MVC pipeline where there is a match.
Generally an interface should be associated with only one filter and filter provider. There are cases where multiple interfaces may be handled by a single filter/filter provider. This should be reserved for cases where the interfaces are closely related, preferably through extension.
This example shows an interface IFoo that is used to provide a context IFooContext to controllers. This context is disposable, so the action filter will call Dispose on it after the controller action has run. This rather trivial example doesn’t do anything a controller couldn’t easily do itself, but more realistic usage would be expected to have a rather more involved implementation for the action filter.
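A minimal sketch of those three elements might look like the following (the content of IFooContext and the factory delegate are illustrative; this assumes the ASP.NET MVC IFilterProvider and IActionFilter extension points):

    using System;
    using System.Collections.Generic;
    using System.Web.Mvc;

    // The context injected into controllers that opt in via IFoo.
    public interface IFooContext : IDisposable
    {
        string Value { get; }
    }

    // Implementing this interface signals that a controller wants the Foo behaviour.
    public interface IFoo
    {
        IFooContext FooContext { get; set; }
    }

    // The action filter: injects the context before the action runs,
    // disposes it once the action has completed.
    public class FooFilter : IActionFilter
    {
        private readonly Func<IFooContext> _contextFactory;

        public FooFilter(Func<IFooContext> contextFactory)
        {
            _contextFactory = contextFactory;
        }

        public void OnActionExecuting(ActionExecutingContext filterContext)
        {
            var foo = filterContext.Controller as IFoo;
            if (foo != null)
            {
                foo.FooContext = _contextFactory();
            }
        }

        public void OnActionExecuted(ActionExecutedContext filterContext)
        {
            var foo = filterContext.Controller as IFoo;
            if (foo != null && foo.FooContext != null)
            {
                foo.FooContext.Dispose();
            }
        }
    }

    // The filter provider: attaches FooFilter only where the controller implements IFoo.
    public class FooFilterProvider : IFilterProvider
    {
        private readonly Func<IFooContext> _contextFactory;

        public FooFilterProvider(Func<IFooContext> contextFactory)
        {
            _contextFactory = contextFactory;
        }

        public IEnumerable<Filter> GetFilters(
            ControllerContext controllerContext, ActionDescriptor actionDescriptor)
        {
            if (controllerContext.Controller is IFoo)
            {
                yield return new Filter(
                    new FooFilter(_contextFactory), FilterScope.Controller, order: null);
            }
        }
    }

The provider would be registered once at application start (for example via FilterProviders.Providers.Add), after which any controller that implements IFoo receives the behaviour without further wiring.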
Interfaces are limited in that either a class implements them or it does not. This does not leave much room for varying behaviour. What we can do however is take advantage of the ability of .NET classes to implement multiple interfaces. The general approach is to have a primary interface that requests the default behaviour. Variations may then be requested by implementing other interfaces. Some or all of these interfaces may be pure marker interfaces with no properties or methods. The filter or filter provider may then vary their behaviour as appropriate based on which interfaces a controller instance implements.
One example is requiring authentication (in scenarios where you are writing a custom authentication system rather than using the ASP.NET infrastructure). You may have a base interface IRequireAuthentication that restricts the controller to authenticated users only (by having the filter return a 401 result if the request is not authenticated, preventing the controller method from executing). However in some scenarios you need to know the member identifier. In others you need to know the roles the member belongs to, and in some cases you need both the member ID and roles.
You can provide two additional interfaces, IRequireMemberId and IRequireMemberRoles. These define properties through which the member ID and roles (respectively) can be injected into the controller. Both of these can extend IRequireAuthentication so that implementing one or both automatically makes the controller require authentication. The filter can check for these interfaces and inject the relevant information into the controller.
You can also mix behaviour because interfaces are not restricted to single inheritance. A controller may wish to know a member ID if available but have a default behaviour for unauthenticated requests. You can define ISupportMemberId that defines the member ID property as nullable. The filter provider and filter are varied to populate the member ID via this interface on authenticated requests. If the request is not authenticated, the orthogonal IRequireAuthentication interface is used to determine whether a 401 is returned. You can then compose IRequireMemberId by extending ISupportMemberId and IRequireAuthentication to get the desired behaviour. (This does leave the member ID nullable on controllers where it can never be null, which may not be desirable. In such cases an alternate composition may be preferred.)
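The interface definitions for that composition might look like this (the property shapes are illustrative):

    using System.Collections.Generic;

    // Base behaviour: the filter rejects unauthenticated requests.
    public interface IRequireAuthentication { }

    // Optional member ID: populated for authenticated requests, otherwise left null.
    public interface ISupportMemberId
    {
        int? MemberId { get; set; }
    }

    // Roles of the authenticated member; extending IRequireAuthentication means
    // implementing this also demands authentication.
    public interface IRequireMemberRoles : IRequireAuthentication
    {
        IEnumerable<string> MemberRoles { get; set; }
    }

    // Composition: authentication is required, so MemberId will always be populated
    // even though the property remains nullable.
    public interface IRequireMemberId : ISupportMemberId, IRequireAuthentication
    {
    }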
Behaviours often have dependencies on infrastructure code and configuration. These can be supplied cleanly as constructor dependencies of the filter provider, which can use them directly or pass them on to the filter instances it creates as required. This eliminates the need to utilise global state.
Comparison to Attributes
Filters may also be provided as .NET attributes. This allows behaviours to be applied to specific controller methods. As I generally prefer small controllers that contain only a few related actions (or preferably only one) I do not consider this to be a significant advantage. Use of marker interfaces has a couple of clear advantages over attributes:
Interfaces can define properties (and less often methods) through which the filter can cleanly provide context information. Controller code then has an explicit context it can rely on without having to be concerned with how this context is supplied.
Interfaces can be passed as method parameters (including extension methods) to provide common functions that are applicable to controllers that request a particular behaviour.
Interfaces can be generic type constraints so that utility code can work with the static type system.
Dependencies can be injected into the filter provider when it is instantiated rather than requiring them to be provided via global state.
Comparison to Base Classes
Common functionality may also be provided by a common base class from which controllers derive. The largest problem with this is that there is rarely a good inheritance hierarchy that can provide the range of functions a system needs without duplication. This is a classic composition over inheritance argument. Using inheritance tends to lead to many classes from which controllers can derive. In this case it is often not clear which class a controller should derive from and what specific behaviours each provides. Where there are multiple behaviours the implementations of the base classes also risk being duplicative.
In comparison you can have multiple orthogonal interfaces that specify individual behaviours and use only those that are relevant. Although you must still let people know what interfaces are available they can be named with much less ambiguity than a type that aggregates multiple behaviours.
It is also possible to use mechanisms like extension methods that extend Controller (or a generic type parameter constrained to Controller) to add common methods to your controllers where these methods do not need to maintain state on the controller class.
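For example (assuming the ISupportMemberId interface sketched earlier), a stateless helper can be exposed only to controllers that carry the relevant interface:

    using System.Web.Mvc;

    public static class MemberControllerExtensions
    {
        // Available only to controllers that opted in via ISupportMemberId;
        // no shared base class or per-controller state is required.
        public static bool HasMember<TController>(this TController controller)
            where TController : Controller, ISupportMemberId
        {
            return controller.MemberId.HasValue;
        }
    }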
What would be nicer
Although this is currently not possible in C# using mixins (such as Ruby modules) would make this cleaner and simpler. Also while I’m wishing I’d like an Airbus A380.
One of the rules in Microsoft’s FxCop static code analysis tool suggests that you should avoid empty interfaces.
Interfaces define members that provide a behaviour or usage contract. The functionality that is described by the interface can be adopted by any type, regardless of where the type appears in the inheritance hierarchy. A type implements an interface by providing implementations for the members of the interface. An empty interface does not define any members. Therefore, it does not define a contract that can be implemented. If your design includes empty interfaces that types are expected to implement, you are probably using an interface as a marker or a way to identify a group of types. If this identification will occur at run time, the correct way to accomplish this is to use a custom attribute. Use the presence or absence of the attribute, or the properties of the attribute, to identify the target types. If the identification must occur at compile time, then it is acceptable to use an empty interface.
I’m going to call this rule out as fundamentally wrong. There are many scenarios, both at compile time and at runtime, where determining that an instance implements a marker interface is of great value. To suggest that using an attribute is the “correct” way to make identifications at runtime is a sweeping statement that cannot be supported. Further, there is a reliance here that the exceptions will be considered when evaluating the rule. Unfortunately it is all too common (and I myself have been guilty of this) to take the tool’s pronouncement without question, and these details will be lost. As a result designs that are entirely valid solutions will be degraded for no good reason.
There seems to be an assumption in the above that the contract of an interface is only that which is explicitly specified in terms of its members. This assumption is clearly false. Almost every possible interface has implicit contracts that are not directly expressed in the code. For instance the IDisposable.Dispose() method has an implicit contract that it will not throw if invoked multiple times (not that Microsoft themselves always follow this contract). This contract is not expressed directly on the interface as C# provides no way to do so. Yet it exists. I see no legitimate reason to deny that the contract of an interface may be entirely implicit provided it is done in a considered fashion.
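Nothing in the interface definition expresses that contract; a conforming implementation simply has to honour it (the ReportWriter type here is invented for illustration):

    using System;
    using System.IO;

    public class ReportWriter : IDisposable
    {
        private readonly StreamWriter _writer = new StreamWriter("report.txt");
        private bool _disposed;

        // Implicit contract of IDisposable: calling Dispose more than once must be safe.
        public void Dispose()
        {
            if (_disposed)
            {
                return;
            }
            _writer.Dispose();
            _disposed = true;
        }
    }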
It is entirely legitimate that at runtime I may wish to test whether an instance adheres to a contract implicitly defined by an interface. This could be done with an attribute, but dealing with attributes is inherently more complex. Further, attributes cannot be used at compile time in constructs such as generic type constraints. If I wish to make decisions at both compile time and runtime, or even just not to unnecessarily close off the possibility, a marker interface is the only possible choice. If I choose an attribute and later wish to do compile time checking then I would need to perform a significant refactoring (which may not be possible if my API has already been published).
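A small sketch of both uses (the ISupportsBulkCopy marker and surrounding types are invented for illustration):

    using System.Collections.Generic;

    // Marker: the implicit contract is "safe to bulk copy".
    public interface ISupportsBulkCopy { }

    public class CustomerRecord : ISupportsBulkCopy { }

    public static class BulkCopier
    {
        // Compile time: the generic constraint only admits types carrying the marker.
        public static void Copy<T>(IEnumerable<T> items) where T : ISupportsBulkCopy
        {
            // ... perform the copy ...
        }

        // Run time: a simple type test, no reflection over attributes required.
        public static bool CanCopy(object instance)
        {
            return instance is ISupportsBulkCopy;
        }
    }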
Attributes of course do have usages that interfaces (marker or otherwise) do not. An interface cannot be used at anything but the type level; attributes may be attached to almost anything. Attributes may also carry usage specific data and provide behaviour. If you need these capabilities then your choice is clear. Personally I find APIs that require these usages to be, on the whole, difficult and obscure to work with and extend (with some notable exceptions such as parameterised tests in MbUnit). However they are useful (if overused) capabilities that are worth considering.
TL;DR: FxCop is wrong, Marker Interfaces rock. Go forth and disable rule CA1040 immediately.
When discussing Implicit Dependencies, one solution commonly adopted is to perform common functions wherever they are required. This is common for such things as establishing database and ORM connections and performing exception logging. The thinking appears to be that if such code is explicitly included then there is no chance that the behaviour will be misunderstood or overlooked (it may also be due to developers being insufficiently familiar with the platform to implement these functions in a centralised fashion).
Unfortunately this approach commits the cardinal development sin of duplicating code. Although this locks the code into a particular approach for the duplicated function this is not the biggest issue. The larger problems are in the noise and potential for error this introduces as well as the significant maintenance burden this approach entails.
Intermixing code for low level functions into higher level code does make the behaviour of the low level functions more readily apparent than an implicit dependency. Unfortunately the trade-off is that the higher level code is subject to significant noise that distracts from its core purpose. This makes the code harder to read and increases the number of concepts to be considered when it is modified. Duplicating code also runs the risk of introducing errors whenever the code is replicated, and any error found in duplicated code must be fixed in many places. Such large scale edits are inherently extremely risky.
Rather than introduce the maintainability issues inherent with duplicative code or implicit dependencies the appropriate solution is generally to provide a mechanism through which code can indicate that it requires a particular behaviour without it having to be responsible for the implementation of that behaviour. There are a number of ways this may be accomplished, including:
Mandatory (generally constructor) parameters for types built with a DI container. For instance a type may have a Func parameter on its constructor so that it has a mechanism for obtaining an ORM session injected, without having to be responsible for the implementation of that mechanism (see the sketch after this list).
By implementing a marker interface that indicates to infrastructure code that a particular behaviour is required. For example, in an ASP.NET MVC application you might have an IRequireFoo interface. You can then implement a FooFilterProvider that checks whether a controller implements this interface and, if so, registers the FooFilter to provide the necessary behaviour. The interface may have no members, but it can also provide properties through which the filter may inject relevant context into a controller, which can then assume that this context is present.
By adding attributes. These can be custom or platform provided. ASP.NET MVC provides such attributes as HttpPostAttribute that the platform uses to restrict how controller actions may be invoked. Attributes are necessary when a particular behaviour is applicable only to a method or where there is instance specific configuration data that cannot be expressed via an interface.
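A sketch of the first of these options, assuming NHibernate and a container configured to supply the session delegate (the Customer type is invented for illustration):

    using System;
    using NHibernate;

    public class Customer
    {
        public virtual int Id { get; set; }
        public virtual string Name { get; set; }
    }

    public class CustomerUpdater
    {
        private readonly Func<ISession> _sessionFactory;

        // The type states that it needs a way to obtain an ORM session;
        // the DI container supplies the mechanism, this class never builds it.
        public CustomerUpdater(Func<ISession> sessionFactory)
        {
            _sessionFactory = sessionFactory;
        }

        public void Rename(int customerId, string newName)
        {
            var session = _sessionFactory();
            var customer = session.Get<Customer>(customerId);
            customer.Name = newName;
            session.Update(customer);
        }
    }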
In ASP.NET MVC in particular, the use of marker interfaces with filters and filter providers is a very powerful way of implementing behaviour in a common fashion whilst making the use of that behaviour explicit.
There are some areas where it is necessary to continue to repeat code. These are where the common behaviours must be influenced by the results of the higher level code. In such cases it is not appropriate to attempt to force a single way of behaving, and customisation inside the higher level code is necessary. It is still desirable to provide consistent implementations of the handling of the different possible cases, but invocation of these implementations cannot be generalised.
One area where this is true is transaction handling. Higher level code may use a common mechanism to indicate that it requires a transaction (although this should be opt-in to support cases where it is not appropriate), but control of the transaction is often significantly influenced by the execution of the higher level code. This higher level code may need to perform actions such as starting nested or independent transactions, using multiple transactions, committing on its own schedule or rolling back without failing. Handling these scenarios using a single transactional behaviour is likely to be problematic and a more flexible approach is warranted. A more appropriate solution would have the higher level code itself separate out controlling code, including transaction control, from the code that performs the desired functions.
Error and exception handling is another area where custom code may be required. For serious unrecoverable errors most systems provide an interception point that can be used to record exception details and perform generic handling (such as returning a 403 error code if a security check fails). This generic handling is not always appropriate as some exceptions are recoverable or need to have additional or alternate handling behaviour. In these cases (only) it is appropriate to catch exceptions directly in higher level code. Functions such as logging exceptions should be suitably encapsulated so that where custom handling must do this there is minimal duplication.