This is the mail archive of the libc-alpha@sourceware.org mailing list for the glibc project.



Re: Building consensus over DNSSEC enhancements to glibc.


On 11/19/2015 08:10 AM, Petr Spacek wrote:
> On 19.11.2015 04:58, Zack Weinberg wrote:
>> On 11/18/2015 10:40 AM, Petr Spacek wrote:
>> This does not have to be as difficult as you are making it.
>>
>> Unsigned zones are allowed for compatibility only.  New record types do
>> not have to work in unsigned zones.  In fact, new record types SHOULD
>> NOT[rfc2119] work in unsigned zones, because if they only work in signed
>> zones the security considerations become simpler.
> 
> Please let me explain why the assumption 'new record types SHOULD NOT[rfc2119]
> work in unsigned zones' is not feasible.

I'm not convinced.

> I will use a couple of (counter)examples:
> Speaking purely about record types, are you implying that e.g. the EUI48 RR
> type needs to be signed? 

Yes.

> Why is that? EUI48 was standardized in 2013, RFC 7043,
> well past the DNSSEC RFCs, so time is not a good indicator here.

Because there is no need for it to work in unsigned zones.

More concretely, a record type should be accepted in an unsigned zone if
and only if it is *necessary* for interoperability, i.e. there are a
large number of existing unsigned zones, on the public Internet,
containing that record type.
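That policy reduces to a fixed whitelist check. A minimal sketch in C, with illustrative type codes from the IANA registry; the whitelist contents here are examples of the idea, not a proposal for the actual set:

```c
/* Sketch: accept a record from an unsigned zone only if its type is
   on a fixed interoperability whitelist.  The set shown is
   illustrative only. */
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

enum {
    T_A     = 1,    /* IPv4 address */
    T_NS    = 2,
    T_CNAME = 5,
    T_MX    = 15,
    T_AAAA  = 28,   /* IPv6 address */
    T_EUI48 = 108,  /* RFC 7043; deliberately NOT whitelisted */
};

/* Types that must be accepted from unsigned zones purely because a
   large number of existing unsigned zones on the public Internet
   contain them. */
static const uint16_t unsigned_ok[] = { T_A, T_NS, T_CNAME, T_MX, T_AAAA };

static bool
accept_record (uint16_t rrtype, bool zone_is_signed)
{
  if (zone_is_signed)
    return true;      /* signed zones may carry any type */
  for (size_t i = 0; i < sizeof unsigned_ok / sizeof unsigned_ok[0]; i++)
    if (unsigned_ok[i] == rrtype)
      return true;
  return false;       /* new types never work unsigned */
}
```

The point of the closed list is that no per-type security analysis is needed for future RRtypes: they are simply never accepted unsigned.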

EUI48 in particular appears to be a special-purpose record used in a
small number of zones, by entities that control all involved software.
Those zones can and should be signed.

> Similarly, we would have to consider that there are RR type ranges defined for
> private use. That opens a Pandora's box.

Sign the zone.

> Also, we would have to consider private deployments/DNS in private networks
> which are not signed on purpose. E.g. because there is some black DNS magic
> which auto-generates responses. Or simply because DNSSEC is overkill in a
> particular scenario.

Sign the zone.

Look, the argument you're making is isomorphic to the argument against
"all new Web platform features should be HTTPS only".  It's too
inconvenient, or data integrity is claimed not to be necessary in a
particular case.  The same counterargument applies: establishing the
principle that _all_ new Web (or DNS) features are secure-only is more
valuable than whatever benefit might accrue from allowing a particular
feature not to be secured.  For instance, protocol designers no longer
have to work out whether each new RRtype can be safely used in an
unsigned zone, because it'll never happen.

> Limiting RR types at the DNS library level has the fundamental problem that
> there is simply not enough information to decide what can go through even
> without signatures and what has to be stopped.

No, I don't accept this claim either.  I insist that we can know the
full set of RR types that have to be accepted in unsigned zones for
interoperability's sake, and that there is no other valid excuse for
accepting a record in an unsigned zone.

(Ok, one more: diagnostic utilities.  Diagnostic utilities are not going
to be calling getaddrinfo().)

> Only consumers of the DNS data know for what purpose the DNS data will be used
> and thus only the consumers of the DNS data know what level of trust is
> required. This might even depend on configuration of the consumer.

Again the track record of the Web is instructive: consumers _don't_ know
this, and if they try to work it out anyway, they get it wrong.

> a) Imagine a standard SMTP server configured to do best-effort delivery with
> opportunistic/unauthenticated encryption.
> 
> It does an MX record lookup to determine the host names of SMTP servers for
> the domain example.com. This lookup does not need to be DNSSEC-secured because
> channel security is opportunistic/unauthenticated anyway.
> 
> b) Imagine an SMTP server configured to do RFC 7672-authenticated mail
> transfer for a particular domain and avoid falling back to cleartext.
> 
> In this case the MX record lookup MUST be DNSSEC-secured. (RFC 7672 section 2.2.1.)

This is a case for being able to ask for stricter enforcement for a
particular lookup.  Giving applications that capability is much less
dangerous than giving them the ability to relax enforcement; although
I'm not going to swear there's no denial-of-service possibility hiding
in there.
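The shape such a capability might take, sketched below; note that neither AI_REQUIRE_DNSSEC nor the validation-status plumbing exists in glibc today, both are invented here to illustrate the asymmetry: a flag to tighten enforcement for one lookup, and no flag to loosen it.

```c
/* Hypothetical sketch only: AI_REQUIRE_DNSSEC, EAI_INSECURE, and the
   validation-status enum are all invented for illustration. */
#define AI_REQUIRE_DNSSEC 0x8000   /* invented flag value */
#define EAI_INSECURE      (-200)   /* invented error code */

/* What the stub resolver learned about the answer. */
enum val_status { VAL_SECURE, VAL_INSECURE };

/* Stand-in for a post-lookup policy check inside getaddrinfo():
   strict callers can refuse data from unsigned zones, but no caller
   can ask for *less* than the default policy. */
static int
check_policy (int ai_flags, enum val_status status)
{
  if ((ai_flags & AI_REQUIRE_DNSSEC) && status != VAL_SECURE)
    return EAI_INSECURE;   /* strict caller: refuse unsigned data */
  return 0;                /* default policy already applied */
}
```

An RFC 7672 MTA would set the strict flag for its MX lookups; an opportunistic-TLS MTA would simply not set it and get the default behavior.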

> c) Imagine an NTP (or telnet, or ...) client doing an SRV record lookup with
> the intent to discover NTP servers. The NTP protocol itself can be/is
> unprotected, so there is no point in requiring DNSSEC validation for this
> lookup, because an attacker can MitM NTP as well as DNS.

Insecurity of the application protocol is not a valid excuse for not
protecting the DNS lookup.  NTP has to be fixed _also_.

(SRV, though, might have to be whitelisted too.)

zw
