Question: Show how to implement the stingy algorithm for Horn formula satisfiability (Section 5.3) in time that is linear in the length of the formula (the number of occurrences of literals in it). (Hint: Use a directed graph, with one node per variable, to represent the implications.)

Short Answer


Answer: The problem of deciding whether a given Horn formula is satisfiable is known as Horn-satisfiability. By representing the implications with a directed graph that has one node per variable and one node per clause, the stingy algorithm can be made to run in time linear in the length of the formula, i.e., in the number of literal occurrences.

Step by step solution

01

Build a directed bipartite graph:

Construct a directed bipartite graph with one node per variable and one node per clause. Add an edge from each Horn clause to its positive literal (if it has one), and an edge from each variable to every clause in which that variable appears negated. Building this graph takes time linear in the length of the formula, since every literal occurrence contributes at most one edge.
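As a concrete illustration, the graph can be stored as adjacency lists together with a counter of remaining negated literals per clause. The Python sketch below is one possible realization, under the assumption that each clause is given as a pair (negated_vars, positive_var); the function name build_graph and this clause format are illustrative choices, not part of the textbook's solution.

from collections import defaultdict

def build_graph(clauses):
    # Each clause is (negated_vars, positive_var); positive_var is None for
    # purely negative clauses.  (This representation is an assumption.)
    # remaining[c] = number of negated literals of clause c not yet removed
    # occurs_in[x] = clauses in which variable x appears negated, i.e. the
    #                edges x -> c of the directed bipartite graph
    remaining = []
    occurs_in = defaultdict(list)
    for c, (negated_vars, _positive_var) in enumerate(clauses):
        remaining.append(len(negated_vars))
        for x in negated_vars:
            occurs_in[x].append(c)
    return remaining, occurs_in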

02

Propagate the forced truth assignments:

For each clause c that contains no negated literals {
    propagate(c)
}

The procedure propagate is defined as follows:

propagate(c) {
    if c contains no positive literal, stop: the formula is unsatisfiable
    let p be the positive literal of c
    if p has not been marked true yet {
        mark p true
        for each edge (p, c') in the graph {
            remove the occurrence of ¬p from c'
            if c' contains no more negated literals, propagate(c')
        }
    }
}

If the procedure never declares the formula unsatisfiable, the formula is satisfiable: set every marked variable to true and all remaining variables to false. The entire process runs in time linear in the number of literal occurrences, since each variable is marked true at most once and each edge of the graph is examined at most once.
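Putting the two steps together, the following self-contained Python sketch mirrors the pseudocode above; the function name horn_sat, the clause representation, and the use of an explicit stack in place of recursion are assumptions made for the sake of illustration, not the textbook's reference implementation.

from collections import defaultdict

def horn_sat(clauses):
    # Each clause is (negated_vars, positive_var); positive_var is None for a
    # purely negative clause.  Returns the set of variables that must be true,
    # or None if the Horn formula is unsatisfiable.
    # Step 01: build the graph and the per-clause counters.
    remaining = []                   # negated literals of each clause not yet removed
    occurs_in = defaultdict(list)    # edges x -> clauses where x appears negated
    for c, (negated_vars, _) in enumerate(clauses):
        remaining.append(len(negated_vars))
        for x in negated_vars:
            occurs_in[x].append(c)

    # Step 02: propagate, using an explicit stack instead of recursion.
    true_vars = set()
    stack = [c for c in range(len(clauses)) if remaining[c] == 0]
    while stack:
        c = stack.pop()
        p = clauses[c][1]
        if p is None:                # a purely negative clause has lost all its literals
            return None              # unsatisfiable
        if p in true_vars:
            continue
        true_vars.add(p)             # mark p true
        for c2 in occurs_in[p]:      # follow every edge (p, c')
            remaining[c2] -= 1       # remove the occurrence of ¬p from c'
            if remaining[c2] == 0:   # no negated literals left: propagate(c')
                stack.append(c2)
    return true_vars                 # all other variables can stay false

For example, horn_sat([(set(), 'x'), ({'x'}, 'y'), ({'x', 'y'}, None)]) returns None, because x and y are forced true and the purely negative clause (¬x ∨ ¬y) is then violated. Each clause is pushed onto the stack at most once and each edge is followed at most once, so the running time is linear in the length of the formula.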



Most popular questions from this chapter

The following statements may or may not be correct. In each case, either prove it (if it is correct) or give a counter-example (if it isn't correct). Always assume that the graph G = (V, E) is undirected. Do not assume that edge weights are distinct unless this is specifically stated.

  1. If a graph G has more than |V| - 1 edges, and there is a unique heaviest edge, then this edge cannot be part of a minimum spanning tree.
  2. If G has a cycle with a unique heaviest edge e, then e cannot be part of any MST.
  3. Let e be any edge of minimum weight in G. Then e must be part of some MST.
  4. If the lightest edge in a graph is unique, then it must be part of every MST.
  5. If e is part of some MST of G, then it must be a lightest edge across some cut of G.
  6. If G has a cycle with a unique lightest edge e, then e must be part of every MST.
  7. The shortest-path tree computed by Dijkstra's algorithm is necessarily an MST.
  8. The shortest path between two nodes is necessarily part of some MST.
  9. Prim's algorithm works correctly when there are negative edges.
  10. (For any r > 0, define an r-path to be a path whose edges all have weight < r.) If G contains an r-path from node s to node t, then every MST of G must also contain an r-path from node s to node t.
The following table gives the frequencies of the letters of the English language (including the blank for separating words) in a particular corpus.

blank  18.3%    r  4.8%    y  1.6%
e      10.2%    d  3.5%    p  1.6%
t       7.7%    l  3.4%    b  1.3%
a       6.8%    c  2.6%    v  0.9%
o       5.9%    u  2.4%    k  0.6%
i       5.8%    m  2.1%    j  0.2%
n       5.5%    w  1.9%    x  0.2%
s       5.1%    f  1.8%    q  0.1%
h       4.9%    g  1.7%    z  0.1%

  1. What is the optimum Huffman encoding of this alphabet?
  2. What is the expected number of bits per letter?
  3. Suppose now that we calculate the entropy of these frequencies

H = Σ_{t=0}^{26} p_t log(1/p_t)

(see the box on page 143). Would you expect it to be larger or smaller than your answer above? Explain.

  4. Do you think that this is the limit of how much English text can be compressed? What features of the English language, besides letters and their frequencies, should a better compression scheme take into account?

Let T be an MST of graph G. Given a connected subgraph H of G, show that T ∩ H is contained in some MST of H.

Under a Huffman encoding of symbols with frequencies f1, f2, ..., fn, what is the longest a codeword could possibly be? Give an example set of frequencies that would produce this case.

Sometimes we want light spanning trees with certain special properties. Here’s an example.

Input: Undirected graph G = (V, E); edge weights w_e; subset of vertices U ⊆ V

Output: The lightest spanning tree in which the nodes of U are leaves (there might be other leaves in this tree as well).

(The answer isn’t necessarily a minimum spanning tree.)

Give an algorithm for this problem which runs in O(|E| log |V|) time. (Hint: When you remove the nodes U from the optimal solution, what is left?)
