To be more specific: abstractions, while vital to our very craft as programmers, are what make code hard to read. The particular means of abstraction is not so important in this regard.
Going with this notion, we can call macros syntactic abstraction. Another kind is semantic abstraction, which is introduced by libraries and frameworks. While a macro changes how a given piece of code is expressed visually, a library changes which incantations, expressed in well-known syntax, are necessary.
At the risk of detracting from the general point I'm trying to make, I'll give an example of what I mean. Say we're designing a URL library in Haskell. It will obviously have a Url type, and the user will need some way of constructing values of that type. Being good Haskell programmers, we want as much type safety as possible, so a URL should be encoded with proper structure.
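To make the discussion concrete, here is a minimal sketch of the types such a library might expose. All of the names are hypothetical, and a real library would be richer (ports, query strings, fragments, and so on):

    data Scheme = Http | Https

    data Domain = Tld String
                | SubDomain String Domain

    newtype Path = Path [String]

    data Url = Url Scheme Domain Path

    -- Smart constructors used in the examples below.
    tld :: String -> Domain
    tld = Tld

    subDomain :: String -> Domain -> Domain
    subDomain = SubDomain

    path :: [String] -> Path
    path = Path

    mkUrl :: Scheme -> Domain -> Path -> Url
    mkUrl = Url

With those in scope, a traditional library approach might be something like: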
    u = mkUrl Http (subDomain "example" (tld "com")) p
      where p = path ["some", "file", "path", someVariable]

Using some syntactic abstraction, we could define a few infix operators. Thus:
mkUrl Http ("example" <.> tld "com") p where p = "some" ./. "file" ./. "path" ./. someVariableFinally, we could go all out using template haskell with quasi-quotation:
    [url| http://example.com/some/file/path/${someVariable} |]

(Of course, we could also have used a raw string, but that only gives less type safety for no extra flexibility. Yes, the library should probably have a parse function, but it should be used for parsing and not, in general, for constructing URLs.)
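For the curious, the skeleton of such a quasi-quoter is not much code. What follows is a sketch, assuming the types above extended with deriving Lift, plus a hypothetical parse function parseUrl :: String -> Maybe Url. Interpolation of ${...} variables is elided; a real quoter would parse them out and splice them in as expressions:

    import Data.Char                  (isSpace)
    import Data.List                  (dropWhileEnd)
    import Language.Haskell.TH.Quote  (QuasiQuoter (..))
    import Language.Haskell.TH.Syntax (lift)

    url :: QuasiQuoter
    url = QuasiQuoter
      { quoteExp  = \s -> case parseUrl (trim s) of
          Just u  -> lift u   -- malformed URLs are rejected at compile time
          Nothing -> fail ("url: malformed url: " ++ s)
      , quotePat  = notSupported
      , quoteType = notSupported
      , quoteDec  = notSupported
      }
      where
        trim = dropWhile isSpace . dropWhileEnd isSpace
        notSupported _ = fail "url: only usable as an expression"

(A use site needs the QuasiQuotes extension and must import url from another module.)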
Now, some might say that the last example is hard to read. What is that "://" syntax about? Well, I expect most of you know what it means, but let's pretend for a moment that you don't. Faced with this abstraction, you will have to either read the documentation, which had better be good and thorough, or play with the code to infer its meaning. The infix example is no better, however: who knows what those operators do? I also claim that the first example isn't really clearer. What is a tld? What counts as a domain? Why is a URL built from exactly those three components?
The point is that the major challenge here is understanding the notion of a URL and how the library maps that concept onto Haskell. Whether we then have to learn the names of a bunch of functions, the names and fixities of some operators, or a domain-specific syntax is not all that important.
If I were to design a URL library, I would probably provide constructors as in the first example for programmatically building URLs, a parser that attempts to turn a string into a URL, and a quasi-quoter as in the last example for constructing (and statically checking!) a URL of which part is known at compile time. The focus would be on the documentation that explains the notion of a URL and how it is encoded in the library. The challenge for the user lies in understanding the abstraction; once that is overcome, the medium of expression is not a significant obstacle. You should therefore be free to choose syntactic abstraction where it fits the domain better than a more traditional library approach.
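Concretely, and still hypothetically, the public face of such a library might then be as small as:

    module Url
      ( -- the abstract type and its pieces
        Url, Scheme (..), Domain, Path
        -- programmatic construction (the first example)
      , mkUrl, subDomain, tld, path
        -- run-time parsing
      , parseUrl            -- String -> Maybe Url
        -- compile-time construction and checking (the last example)
      , url                 -- the quasi-quoter
      ) where

Everything else is documentation.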