“The specified address was excluded from the index”

Hello,

an issue that occurred recently was that a content source within our SSP for search (MOSS 2007) did not include any items. The crawl log in SharePoint’s Central Administration stated the following:

The specified address was excluded from the index. The crawl rules may have to be modified to include this address. (The item was deleted because it was either not found or the crawler was denied access to it.)

Interestingly, some of the content sources we already had were crawled without any obstacles, so a (mis)configuration of the problematic application seemed likely. Checking the permissions of the service accounts involved in the crawling process ruled them out, and comparing the settings between the applications revealed no differences either. The problem lay in the crawl rules set up for this content source: the option for crawling complex URLs had not been activated for the subdomain URL we wanted to crawl. Enabling the “Crawl complex URLs (URLs that contain a question mark (?))” option under Shared Services Administration: SSP > Search Administration > Crawl rules > Add or Edit Crawl Rule and starting a full crawl from the beginning solved the problem.

Still, the question remained why the non-complex, normal URLs could not be crawled by the service. The cause was in our IIS configuration, which is globally set up to automatically detect the cookie mode for session state. This results in a query string parameter being appended to the URL on the first request, so that the URL looks similar to this: http://www.ourdomain.com/index.html?AspxAutoDetectCookieSupport=1 .
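For reference, this behavior comes from the cookieless session-state setting in the ASP.NET configuration. A minimal sketch of such a setting (the exact file and surrounding configuration depend on your application) could look like this:

```xml
<configuration>
  <system.web>
    <!-- "AutoDetect" makes ASP.NET probe the browser's cookie support on the
         first request, which appends ?AspxAutoDetectCookieSupport=1 to the URL -->
    <sessionState cookieless="AutoDetect" />
  </system.web>
</configuration>
```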

Now it seems pretty clear why the crawler had problems without the rule mentioned above. It failed at the first request to the root URL, since no crawl rule matched the complex URL it was redirected to. Hence, it could not continue crawling and left the index empty, with only the error/warning message.

Hope this helps,
Łukasz

“Membership credential verification failed”

Hello,

there is quite a lot of writing across the web on this ASP.NET issue:

Membership credential verification failed.
(…)
Name to authenticate: userXYZ

When you have role, membership, or other providers configured in your ASP.NET application, you have to remember to set the applicationName attribute in each of the provider entries in your web.config file. That’s clear. But what if the message still comes up?

In my case I was using forms authentication against Active Directory, with a custom role provider. The odd thing was that only a few of the users could not log in correctly; the others were signing in just fine. So after checking the AD properties of the user objects (there was a suggestion that the problem might be caused by similar objects in different OUs) and analyzing the role logic of the custom provider, nothing seemed to be causing the trouble.

The actual reason was very simple. The users who had problems signing in had cookies disabled in their browsers, so forms authentication could not store the authentication ticket on the client side. One should keep in mind that Windows Server log entries with the “Membership credential verification failed” message can also be caused by something as trivial as unavailable cookies in users’ browsers.
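As a sketch, the relevant part of such a configuration might look like the snippet below (the login URL is illustrative). With cookieless set to UseCookies, clients that have cookies disabled cannot complete forms authentication, while AutoDetect or UseUri would fall back to cookieless tickets:

```xml
<system.web>
  <authentication mode="Forms">
    <!-- UseCookies requires cookies on the client side;
         AutoDetect or UseUri would tolerate browsers with cookies disabled -->
    <forms loginUrl="Login.aspx" cookieless="UseCookies" />
  </authentication>
</system.web>
```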

Hope this helps,
Łukasz

SharePoint: access denied when trying to copy a list (item)

Hey there,

Lately, while trying to copy a SharePoint list from one site to another (and later also single list items), I got the infamous “Access denied” SharePoint error. The first idea, of course, is to log in as a super-user. But when the operation failed even with an account holding the Site Collection Administrator and/or Site Owner role, it seemed less trivial than just a missing permission within the site collection.

Unfortunately, a quick look into the SharePoint logs didn’t get me much further:

Access is denied. (Exception from HRESULT: 0x80070005 (E_ACCESSDENIED))

Since this problem occurred in more than one application on our SharePoint server, I assumed it to be a global misconfiguration and therefore checked Central Administration. That is where the solution was:

In Central Administration > Operations > Service Accounts I checked which account is actually responsible for the communication with Windows SharePoint Services on our server. In the Web application pool section, I selected the WSS web application and the application pool of the application that was giving me the “Access denied” message.

The account was the predefined one: Network Service.

There’s the rub! Since we use our own domain accounts for such purposes, and only they are allowed to access WSS, the Network Service account was actually the one receiving the “Access denied” message (when trying to connect to one of the SharePoint web services).

Changing the account from the predefined one to a configurable one with our username and password did the trick. I just had to run iisreset after the change.

This change probably also fixes some other problems we may have encountered where the communication between an application and WSS would fail.

Hope this helps,
Łukasz

“The site collection could not be restored”

Hi there,

Recently, while performing a standard site collection backup from the production environment and restoring it onto the test environment (MOSS 2007), we ran across the following problem. The stsadm restore command failed after several minutes with the message:

“The site collection could not be restored. Please make sure the content databases are available and have sufficient free space”

OK, so the first suspect has to be disk space. According to the Microsoft Knowledge Base article, one should ensure that the content database does not have a maximum size set and that the drive where WSS is installed has enough free space. After freeing up additional space both on the WSS drive and on the database drive, the amount of free space in each location was almost three times the size of the backup file we wanted to restore. But the error message appeared again.

Further attempts based on other blog posts (like restarting the SharePoint Timer service or running iisreset) did not work in our case. Comparing the exact WSS versions and system patches of both environments showed the same results on both sides, so incompatibility was not the issue either.

What actually did work in the end was removing the content database and adding a new one in Central Administration:

1. Application Management > Content databases > click the existing database, check the “Remove content database” option and confirm.
2. Add a new content database; the settings can stay as they were with the old one, just choose a name different from the previous one.
3. Run the stsadm -o restore … operation once again. Operation successful!
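For completeness, the backup and restore pair we used looks roughly like this (the URLs and file name are placeholders for your own environment):

```
stsadm -o backup -url http://prod/sites/oursite -filename backup.dat
stsadm -o restore -url http://test/sites/oursite -filename backup.dat -overwrite
```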

Interestingly, we did not physically delete the old database’s data files until the restore succeeded. Thus, while executing the command, the system had even less free space than before, but it succeeded anyway.

Hope this helps,
Łukasz

“Error in materialized view refresh path”

Hi there,

here’s just a small tip for those of you who are using Oracle’s materialized views with the refresh-on-commit option. Consider the following scenario:

  1. Create a table (let’s call it “X”) with a not-null column.
  2. Create a materialized view (“Y”) that refreshes itself each time a record in table “X” is inserted, modified, or deleted (REFRESH FORCE ON COMMIT).
  3. Change the column created in step 1 to allow null values.
  4. Insert a record into table “X” with a null value in the column mentioned in step 1.
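The scenario above can be reproduced with a short script (table, view, and column names are made up for illustration):

```sql
-- 1. table with a not-null column
CREATE TABLE x (id NUMBER PRIMARY KEY, val VARCHAR2(50) NOT NULL);

-- materialized view log, needed for fast refresh on commit
CREATE MATERIALIZED VIEW LOG ON x;

-- 2. materialized view refreshing on commit
CREATE MATERIALIZED VIEW y REFRESH FORCE ON COMMIT
AS SELECT id, val FROM x;

-- 3. relax the constraint on the base table only
ALTER TABLE x MODIFY (val NULL);

-- 4. insert a null value
INSERT INTO x (id, val) VALUES (1, NULL);
COMMIT;  -- ORA-12008 (with ORA-01400) is raised here, at refresh time

-- the fix: re-create the view so it picks up the new column definition
DROP MATERIALIZED VIEW y;
CREATE MATERIALIZED VIEW y REFRESH FORCE ON COMMIT
AS SELECT id, val FROM x;
```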

Most likely you’ll get an Oracle error, stating the following:

ORA-12008: error in materialized view refresh path

A further exception may look similar to this:

ORA-01400: cannot insert NULL into …

The second one speaks for itself. Since the materialized view is actually a snapshot, it has “remembered” the original not-null setting of the column in table “X”. When a new record with a null value arrives, the refresh fails on the view’s not-null constraint.

Drop the materialized view and re-create it; problem solved. The new view picks up the new column definition, allowing nulls in that column.

Hope this helps,
Łukasz

Exposing SharePoint calendars for iCalendar clients

Hello there,

Since SharePoint offers out-of-the-box calendar and scheduling capabilities, it is a nice option for teams in terms of improved collaboration and shared agendas. By default, users can access MOSS calendars using a browser or by syncing them with their Microsoft Outlook clients. Additionally, an RSS subscription is available.
So far, so good. But if we need to subscribe to the calendar using a Mac, an iPhone, an iPad, or any other client supporting the iCalendar specification, we have to use a custom solution.

Simply put, a file with the content type text/calendar needs to be generated, and it has to more or less comply with the RFC 2445 specification. Of course you may want to implement it your own way from scratch, but there is a nice iCal Exporter kit on CodePlex. It has been developed as a MOSS feature, but you can easily adapt it to serve users in a different way, for example as a custom HTTP handler. One can create a class implementing the IHttpHandler interface, deploy the class library to the SharePoint application, and register the handler in web.config:

<add verb="GET,POST" path="*/ical.ics" type="MyLibrary.MyIcalHandler" />

Then we can access the generated file, for example via http://myhost/mysite/lists/calendar/ical.ics , and subscribe to the calendar in the corresponding client software. In the handler itself, based on the request URL, we can fetch the corresponding SharePoint list, iterate through its items, generate the proper entries, and flush the result to the browser. The iCal Exporter kit also deals with recurring calendar entries, deleted occurrences of event series, and all-day events, so that almost every case is covered.
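For illustration, a minimal file served by such a handler could look like the sketch below (all values are made up; note that RFC 2445 requires CRLF line endings):

```
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//MyCompany//SharePoint Calendar Export//EN
BEGIN:VEVENT
UID:1@myhost
DTSTAMP:20110102T120000Z
DTSTART:20110105T090000Z
DTEND:20110105T100000Z
SUMMARY:Team meeting
END:VEVENT
END:VCALENDAR
```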

One thing worth mentioning: when generating an event entry, be careful with the DTSTAMP property. In order for iCal clients to properly recognize changes made to calendar entries, one has to assign the last-modified date of the corresponding SPListItem as the DTSTAMP value:

 foreach (SPListItem item in calendarList.Items)
 {
    // write vevent start...
    // convert to UTC first, since the trailing "Z" in the format denotes UTC time
    DateTime modified = Convert.ToDateTime(item["Modified"]).ToUniversalTime();
    String dtstamp = "DTSTAMP:" + modified.ToString("yyyyMMddTHHmmssZ");
    // write dtstamp, other properties, vevent end
 }

Otherwise you may find that clients do not fetch changes made to a calendar entry.

Hope this helps,
Łukasz

Finding Active Directory user’s group membership in C#

Hi there,

there are a couple of ways to programmatically find a user and the groups he belongs to in Active Directory. Recently I tested a few of them, and here are some thoughts on what I found out.

DirectorySearcher

The System.DirectoryServices namespace provides us with the DirectorySearcher class. Its Filter property can be used to specify the search query over the entire directory. An example filter for a user with the login name ‘lkarolak’ could look like this:

(&(objectClass=user)(SAMAccountName=lkarolak))

If the search is successful, the FindOne() method of the DirectorySearcher class returns a SearchResult object (from which a DirectoryEntry can be obtained via GetDirectoryEntry()). Finding this object’s group membership requires iterating through its properties, picking the ones named ‘memberOf’, and then (if needed) performing some recursion to resolve nested group membership. All in all, a bit complicated and quite resource-intensive.
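A minimal sketch of this approach could look like the following (the domain name and login are made up; the code assumes it runs under an account allowed to query the directory):

```csharp
using System;
using System.DirectoryServices;

class MemberOfSample
{
    static void Main()
    {
        // bind to the domain root (hypothetical domain)
        DirectoryEntry root = new DirectoryEntry("LDAP://DC=EXAMPLE,DC=COM");
        DirectorySearcher searcher = new DirectorySearcher(root);
        searcher.Filter = "(&(objectClass=user)(SAMAccountName=lkarolak))";
        searcher.PropertiesToLoad.Add("memberOf");

        SearchResult result = searcher.FindOne();
        if (result != null)
        {
            // each value is the distinguished name of a direct group;
            // nested membership requires a recursive lookup per DN
            foreach (object dn in result.Properties["memberOf"])
            {
                Console.WriteLine(dn);
            }
        }
    }
}
```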

SearchRequest

A similar approach is to use the SearchRequest and SearchResponse objects (this time from the System.DirectoryServices.Protocols namespace), which are executed over an LDAP connection. The filter for the query looks just like in the previous example. In this case, too, one has to recursively search within the attributes of the result class (SearchResultEntry) in order to get all of the user’s nested groups.

Here’s a small example:

// establish a connection to LDAP
LdapDirectoryIdentifier id = new LdapDirectoryIdentifier(domain, port);
LdapConnection _connection = new LdapConnection(id);
_connection.SessionOptions.SecureSocketLayer = secureConnection;
_connection.AuthType = AuthType.Basic;
_connection.Credential = new NetworkCredential(ldapUser, ldapPassword);
_connection.Bind();

// distinguished name of the object 
// at which to start the search.
String _target = "dc=EXAMPLE,dc=COM";

String filter = "(&(objectCategory=person)(SAMAccountName=lkarolak))";
String[] attributesToReturn = { "SAMAccountName", "memberOf", "cn" };

SearchRequest searchRequest = new SearchRequest(_target, filter,
      SearchScope.Subtree, attributesToReturn);
SearchResponse response =
      (SearchResponse)_connection.SendRequest(searchRequest);

if (response.Entries.Count == 1)
{
  SearchResultEntry entry = response.Entries[0];
  for (int index = 0; index < entry.Attributes["memberOf"].Count; index++)
  {
      // get the group name, for example:
      String groupName = entry.Attributes["memberOf"][index].ToString();
  }
}

GroupPrincipal

The most interesting and straightforward solution for me was, however, another approach. The System.DirectoryServices.AccountManagement namespace offers an easy way to find a user in AD and check his group membership. Without having to recursively loop over parent groups, we are able to fetch the user’s groups, and much more. The only constraint is that the code has to run on a machine joined to the domain. Let’s have a look at this sample code:

// "company" is the domain we would like to search in
PrincipalContext pc = new PrincipalContext(ContextType.Domain, "COMPANY");

// get the user of that domain by his username, within the context
UserPrincipal up = UserPrincipal.FindByIdentity(pc, username);

// fetch the group list
PrincipalSearchResult<Principal> groups = up.GetAuthorizationGroups();
GroupPrincipal[] filteredGroups = (from p in groups
           where p.ContextType == ContextType.Domain
           && p.Guid != null
           && p is GroupPrincipal
           && ((GroupPrincipal)p).GroupScope == GroupScope.Universal
           select p as GroupPrincipal).ToArray();

The last lines actually do the trick. The GetAuthorizationGroups() method fetches all the security groups of the user. If we also wanted the user’s distribution groups, we would have to use the GetGroups() method instead. Of course one may want to filter out some groups, like “Everyone” etc., perhaps with the help of a LINQ query as above, or in another way.
In any case, the returned GroupPrincipal objects contain the properties we need in order to get the names of the user’s groups (first of all, the DistinguishedName property).

After some unit tests, this last method also seems to be the fastest of the three mentioned here, probably due to the lack of recursive calls.

Hope this helps,
Łukasz

Verbs configuration for custom HTTP handlers

Hello,

if your ASP.NET or SharePoint web application stores binary data (images, PDFs) in a database, you will probably need an HTTP handler that retrieves those files from the database, transfers them to the users’ browsers, and sets the correct content type on the response. A clever way to do that is to use the IHttpHandler interface, but that is not the point of this entry.

Assume that we have our handler ready and working, for example reading JPEGs from the database and presenting them to the public as if they were normal files on the server (www.example.com/name1.jpg). In order for our SharePoint web application to map such requests to the correct handler, we need a web.config entry within the <httpHandlers> section, like this:

<add verb="GET" path="/*.jpg" validate="false" 
type="MyAssembly.MyHandler, MyAssembly, 
Version=1.0.0.0, Culture=neutral, PublicKeyToken=abcdefghi"
/>

The verb attribute defines the allowed HTTP request methods for this handler. In the case above, an HTTP POST request would be rejected by the server with an HTTP 404 code. For “normal” purposes, such as displaying the images in browsers, the GET method is enough.

However, if clients other than web browsers access the handler, you have to know which HTTP methods they use. Recently, thanks to Fiddler, I was able to discover that one of the client applications used in our organization issues a HEAD request (very similar to GET) in order to retrieve a file. Since that method was not enabled in our configuration, the client could not download the desired file.
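Without a proxy tool like Fiddler, one can also check how the handler reacts to a particular method by issuing the request by hand, for instance with curl (hostname and path are placeholders):

```
# send a HEAD request (headers only) to the handler-served URL
curl -I https://www.example.com/name1.jpg

# compare with a regular GET request, printing only the status code
curl -s -o /dev/null -w "%{http_code}\n" https://www.example.com/name1.jpg
```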

Enabling the HEAD method within the handler’s verbs in web.config solves the issue:

<add verb="GET,HEAD" path="/*.jpg" (...) />

Hope this helps,
Łukasz

Request validation in ASP.Net 4.0

Hello again,

recently, while migrating one of our web applications to .NET Framework 4.0, we came across the following issue. The application’s main purpose is to store HTML articles in a database. However, when a user posted back to save an article, the following error message appeared:

A potentially dangerous Request.Form value was detected from the client (ctl1…)

This kind of message is well known to ASP.NET 2.0 and 3.5 developers. The standard workaround was either to configure a single page not to validate the request:

<%@ Page ValidateRequest="false" %>

..or, globally for the whole application, via web.config:

<pages validateRequest="false" />

We had chosen the second option, since only a couple of trusted and authenticated users were using the application. But, as already said, the error came up after the migration to the 4.0 Framework. Request validation was disabled in the <pages> node, but for ASP.NET 4.0 that is no longer enough.

There are some security improvements in the latest version of ASP.NET, protecting against cross-site scripting (XSS) attacks. The new default protection applies not only to .aspx pages, but to all kinds of requests, such as web service calls and custom handlers, even when custom HTTP modules are used.

That was the reason why our application threw errors it did not throw before. In order to restore the previous behavior of an ASP.NET application, one has to set the request validation mode back to the 2.0 behavior. In web.config, you just add the following attribute to the <httpRuntime> node:

<httpRuntime requestValidationMode="2.0" />

Hope this helps.

Lukasz

IE’s attachment handling on Tomcat via https

Howdy,

today something from a slightly different area. Recently some users of one of the Java applications running in the company reported quite odd behavior. When they tried to download any attachment or binary file generated by the application, they got the following error message as a popup in Internet Explorer (versions 7 and 8):

“IE cannot download…………IE not able to open site..requested site is unavailable or cannot be found…”

After the message, the download was aborted and they could not get the attachments. In all other browsers tested (Firefox, Opera, Chrome and Safari) the problem did not occur. The application runs over HTTPS on Apache Tomcat 6.

The first suspect was, of course, a bug in the application. After some research it turned out not to be the case; it seems to be a bug in Internet Explorer itself. The no-cache headers appear to prevent the browser from downloading attachments over a secure channel. Microsoft released a hotfix for it, but apparently only for IE 6.

To work around the issue, one can force Tomcat not to send the no-cache headers with the response. The way to do that is to put the following valve into the server’s context.xml configuration file (inside the <Context> element):

<Valve className="org.apache.catalina.authenticator.NonLoginAuthenticator"
disableProxyCaching="false" />

After that (a server restart is required), it seems to work fine, with no side effects.

Hope this helps,
Lukasz