NHibernate 3.2: paging broken on Oracle databases

Howdy,

after upgrading to NHibernate 3.2.0, pagination in existing projects that use NH with Oracle databases no longer works correctly – the queries return either zero records or an incorrect (unexpected) number of them. The problem occurs when using the SetFirstResult(int) and SetMaxResults(int) methods of the ICriteria object. Apparently this is caused by a bug in version 3.2.0. Updating to the currently latest version (3.3.0) resolves the issue.
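For context, here is a minimal sketch of the kind of ICriteria paging that was affected (the Order entity and the repository wiring are placeholders made up for this example, not taken from the original projects):

[csharp]
// Illustrative only: typical ICriteria paging that misbehaved under NHibernate 3.2.0 on Oracle.
// The Order entity and the session factory setup are placeholders for the example.
using System.Collections.Generic;
using NHibernate;

public class Order
{
    public virtual int Id { get; set; }
    public virtual string Customer { get; set; }
}

public class OrderRepository
{
    private readonly ISessionFactory sessionFactory;

    public OrderRepository(ISessionFactory sessionFactory)
    {
        this.sessionFactory = sessionFactory;
    }

    public IList<Order> GetPage(int pageIndex, int pageSize)
    {
        using (ISession session = sessionFactory.OpenSession())
        {
            return session.CreateCriteria(typeof(Order))
                .SetFirstResult(pageIndex * pageSize)   // rows to skip
                .SetMaxResults(pageSize)                // page size
                .List<Order>();
        }
    }
}
[/csharp]

Under 3.2.0 on Oracle, a call like GetPage(1, 20) could come back empty or with the wrong number of rows; with 3.3.0 the same code behaves as expected.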

Further information can be found here.

Hope this helps,
Lukasz

Insufficient rights when deploying a SP2010 project from Visual Studio

Hi there,

this time just a quick tip for some of you getting an error when deploying a SharePoint 2010 project from Visual Studio:

Error occurred in deployment step 'Recycle IIS Application Pool': Cannot connect to the SharePoint site. If you moved this project to a new computer or if the URL of the SharePoint site has changed since you created the project, update the Site URL property of the project.

In my case, the Site URL property of the project was empty, yet the project still had to be deployed globally into the environment.

Furthermore, this message can be somewhat misleading, as it may also occur when the user account you are deploying the project with has insufficient rights to do so (e.g. is not a member of the farm administrators group).

Hope this helps,
Lukasz

Sys.WebForms.PageRequestManagerParserErrorException in an intranet-zone web application (IE)

Hi there,

this time I’d like to describe a pretty odd issue I encountered a while ago.

The scenario is as follows: an ASP.NET 4 web application, configured in IIS 7 for basic authentication only. The application makes heavy use of AJAX update panels and other JavaScript. The problem that started occurring was the infamous Sys.WebForms.PageRequestManagerParserErrorException:

—————————
Microsoft Internet Explorer
—————————
Sys.WebForms.PageRequestManagerParserErrorException: The message received from the server could not be parsed. Common causes for this error are when the response is modified by calls to Response.Write(), response filters, HttpModules, or server trace is enabled.

Details: Error parsing near ScriptResource.axd ….

It happened only in IE (versions 7, 8 and 9), but since IE has always been a bit “different”, that was not the strangest part of the problem. What was really odd was that it happened only on some machines – I’ll come back to that later in this post.

However, none of the usual hints on how to avoid that error applied. The application had no calls to Response.Write or Server.Transfer, no HTTP modules and no response filters. Trace was also disabled.

Not for the first time, Fiddler came to the rescue. I noticed that for each request for an aspx page, two requests were actually being made. The first one returned HTTP status code 401 (access denied), the second one succeeded (HTTP 200). Analyzing the request headers, the only difference between them was the authorization method. The request that got the 401 response had:

Authorization: NTLM DFGHJKLDRFGHNXAAAAA==

The second one had:

Authorization: Basic ZXVyasdasdasdasdasdasd=

Since the update panel and other asynchronous requests were getting a 401 response, the application was giving the error mentioned above.
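If you prefer to see the same thing on the server side rather than in Fiddler, a small diagnostic module can log which authorization scheme each request carried. This is only a hypothetical helper sketched for illustration (it was not part of the original application) and it would still need to be registered in web.config:

[csharp]
// Hypothetical diagnostic helper (not part of the original application):
// logs the authorization scheme of each incoming request, which makes the
// NTLM-then-Basic pattern visible in the server-side trace output as well.
using System;
using System.Diagnostics;
using System.Web;

public class AuthHeaderLoggingModule : IHttpModule
{
    public void Init(HttpApplication context)
    {
        context.BeginRequest += OnBeginRequest;
    }

    private static void OnBeginRequest(object sender, EventArgs e)
    {
        var app = (HttpApplication)sender;
        string auth = app.Request.Headers["Authorization"] ?? "(none)";

        // "NTLM", "Basic" or "(none)" – enough to spot the double-request pattern.
        Trace.WriteLine(app.Request.Url + " -> Authorization: " + auth.Split(' ')[0]);
    }

    public void Dispose()
    {
    }
}
[/csharp]

Since the module only reads the request and never touches the response, it does not itself contribute to the parser error described above.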

Now, the question was: why would NTLM authorization be attempted at all, if I had explicitly disabled Windows authentication in IIS? As mentioned earlier, only basic authentication was enabled. Since this app was also a sub-application of another one (which had Windows authentication enabled), my first thought was an inheritance problem with the IIS settings. But the problematic app was overriding the parent setting (Windows auth disabled), so from that point of view everything was OK.

As already mentioned, the issue occurred only on some machines. And therein lay the rub: those machines were our corporate ones, and the website had been added as an intranet site in Internet Options. Additionally, automatic logon for the intranet zone was enabled:

automatic logon only in intranet zone

This setting forces Internet Explorer to first try to authenticate the user logged onto the machine with their domain credentials. That failed, since NTLM/Windows authentication was disabled in IIS. Only then was the ‘correct’ basic authentication request made.

As a solution, you may either remove the application from the intranet sites (if your corporate policy allows it), or enable Windows authentication in the application instead of basic authentication.

Hope it helps,
Łukasz

MOSS Search web service – impersonation problems when calling from an external application

Hi there,

A while ago I was implementing search functionality within an ASP.NET application. The plan was to use SharePoint Search for crawling and indexing the contents and afterwards, from my application, connect to the MOSS web service and run the needed queries against it.

I had some web sites and BDC applications prepared within a scope; everything crawled and indexed – so far so good.

In the ASP.NET application, a service reference to the exposed asmx was added (e.g. http://sharepoint/_vti_bin/search.asmx).
Then I tried to invoke the service, sending a query packet XML object:
[csharp]QueryServiceSoapClient client = new QueryServiceSoapClient();

client.Query("<QueryPacket>….");[/csharp]
At this stage, the following exception occurred:

Error:
Retrieving the COM class factory for component with CLSID {BDEADEE2-C265-11D0-BCED-00A0C90AB50F} failed due to the following error: 80070542

It doesn’t say much, does it? After some reading, it turned out that the app could not authenticate itself against the MOSS web service correctly – the credentials for the web service weren’t being passed as one would expect.
Of course, explicitly assigning a username and password was not the way I wanted to go. I needed the application pool account (which had all the needed permissions on the web service) to be used and impersonated on every call to the asmx.

Changing the client’s impersonation level to ‘Delegation’ solved the issue:
[csharp]client.ClientCredentials.Windows.AllowedImpersonationLevel = System.Security.Principal.TokenImpersonationLevel.Delegation;[/csharp]
Exception gone, search results present.
Hope this helps,
Łukasz

SharePoint: “Loading this assembly would produce a different grant set from other instances” after a security patch from MS

Hello,

after installing the critical patches for .NET framework, as described in the MS Security Bulletin MS11-100, some of our MOSS 2007 applications were hitting the following exception:

FileLoadException: Loading this assembly would produce a different grant set from other instances. (Exception from HRESULT: 0x80131401)

It seems that the security fix applied in patch KB2657424 (maybe also KB2656351, although it targets a different version of the .NET Framework) caused this temporary problem. The solution is to recycle the affected applications’ IIS application pools – afterwards the exceptions no longer occur and the applications work correctly again.
The need to recycle the application pools seems rather strange, since the whole machine had already been restarted when installing those security updates. But still, it worked.

Hope this helps,
Łukasz

ASP.NET membership provider – identifying users in a multi-domain Active Directory

Hello there,

when using the ASP.NET membership provider against Active Directory (System.Web.Security.ActiveDirectoryMembershipProvider), and there is more than one domain within the directory, one may have trouble identifying users distinctly: e.g. DOMAIN1\userXY is a different user than DOMAIN2\userXY. The two must not be confused and should be handled with care.

In the web.config entry for the membership provider we can specify which attribute of the AD object should be checked in order to find exactly the user we mean. The “attributeMapUsername” attribute has two possible values:

  1. sAMAccountName
  2. userPrincipalName

In the first case we have only the username – without the domain – hence we cannot identify precisely which user is referenced. The second option gives us exactly what we need: the username together with the domain (UserName@DomainName), which identifies users unambiguously.

If you’re using the membership provider’s methods in code-behind, in this blog post you’ll find an explanation of how to fetch the Active Directory user properties needed as the provider’s method parameters. They are stored in the UserPrincipal object.
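As a rough sketch of that idea (the context type, identity type and account name below are assumptions made for the example), the userPrincipalName can be read from the UserPrincipal and then handed to the membership API:

[csharp]
// Illustrative sketch: resolve a user's UPN from Active Directory and pass it to the membership API.
// The context type, identity type and account name are assumptions made for the example.
using System.DirectoryServices.AccountManagement;
using System.Web.Security;

public static class UpnLookup
{
    public static MembershipUser FindMembershipUser(string samAccountName)
    {
        // Queries the domain the application account belongs to; pass a domain name
        // to the PrincipalContext constructor if another domain should be searched.
        using (var context = new PrincipalContext(ContextType.Domain))
        using (var user = UserPrincipal.FindByIdentity(context, IdentityType.SamAccountName, samAccountName))
        {
            if (user == null)
            {
                return null; // no such account in this domain
            }

            // With attributeMapUsername="userPrincipalName", the provider expects UserName@DomainName.
            return Membership.GetUser(user.UserPrincipalName);
        }
    }
}
[/csharp]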

Hope this helps,
Łukasz

SharePoint: backup failed – the current operation timed-out after 3600 seconds

Hi,

A short one, though maybe helpful:

Symptoms:
1. MOSS 2007 central administration states: backup failed. One or more databases weren’t properly backed up.
2. Backup logs contain the following message:

Error: Object Shared Search Index failed in event OnPrepareBackup. For more information, see the error log located in the backup directory.
WebException: The current operation timed-out after 3600 seconds

3. Similar message (timeout) regarding the SSP’s database.
4. The SSP administration page indicates one or more apparently endless crawls running on content sources which are rather small.

Resolution:
1. Restart the Office SharePoint Search service.
2. Clear search index – reset crawled content in SSP’s search administration.
3. Start full crawls on your content sources.

Best,
Łukasz


ASP.NET: asynchronous calls to session-enabled web service methods

Howdy,

This time a small hint for those of you who are using jQuery and/or AJAX methods to call an ASP.NET web service, where the web service uses Session variables (web methods marked with [WebMethod(EnableSession = true)]).

I have been using this approach for some time, since I needed to persist some data for users.
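For illustration, a session-enabled web method of the kind in question could look roughly like this (the service name and session key are placeholders made up for the example):

[csharp]
// Illustrative only: a session-enabled web method of the kind described above.
// The service name and the session key are placeholders for the example.
using System.Web.Services;

[WebService(Namespace = "http://example.org/")]
public class UserDataService : WebService
{
    [WebMethod(EnableSession = true)]
    public void SetLastQuery(string query)
    {
        // Session state is available here because of EnableSession = true.
        Session["LastQuery"] = query;
    }

    [WebMethod(EnableSession = true)]
    public string GetLastQuery()
    {
        return Session["LastQuery"] as string;
    }
}
[/csharp]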

However, when it came to performance tests, it turned out that the AJAX calls weren’t really asynchronous. The tests revealed that each call waited for the previous one to finish.

The reason is quite simple and is one of ASP.NET’s limitations: the first request gains an exclusive lock on the session and its variables, and thus blocks execution of the next request until the current one completes. My approach was wrong, and it took a while to discover the cause. Maybe if I had read the last paragraph of this article first, it would have been easier 😉

So, to get better performance for concurrent requests in a similar architecture, one would have to either use other ASP.NET mechanisms for persisting state (like the Cache object), or write a custom solution.
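As a sketch of the first option (the per-user key scheme and the 20-minute expiration below are assumptions made for the example, not a recommendation from the original setup):

[csharp]
// Illustrative sketch: keeping the per-user data in the Cache instead of the Session,
// so that no exclusive session lock is taken and the AJAX calls can run concurrently.
// The key scheme and the expiration time are assumptions made for this example.
using System;
using System.Web;
using System.Web.Caching;
using System.Web.Services;

[WebService(Namespace = "http://example.org/")]
public class UserDataCacheService : WebService
{
    private static string KeyFor(string userName)
    {
        return "LastQuery_" + userName;
    }

    [WebMethod] // no EnableSession, hence no session lock
    public void SetLastQuery(string query)
    {
        HttpRuntime.Cache.Insert(
            KeyFor(User.Identity.Name),
            query,
            null,                              // no cache dependency
            DateTime.UtcNow.AddMinutes(20),    // absolute expiration
            Cache.NoSlidingExpiration);
    }

    [WebMethod]
    public string GetLastQuery()
    {
        return HttpRuntime.Cache[KeyFor(User.Identity.Name)] as string;
    }
}
[/csharp]

Note that, unlike the session, the cache is shared by all users and may evict entries under memory pressure, so the key scheme and the lifetime need to be chosen accordingly.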

Hope this helps,

Łukasz


SharePoint: deleting an SSP leaves a running SQL Agent job

Hello,

On one of our MS SQL Server database backend machines, I was getting a lot of Windows event log entries stating that our SQL Server account had a problem accessing one of the databases:

Login failed for user 'DOMAIN\sqluser'. Reason: Failed to open the explicitly specified database. [CLIENT: x.x.x.x]

Investigating the corresponding SQL Server instance logs, further details of the issue followed:

[298] SQLServer Error: 18456, Login failed for user 'DOMAIN\sqluser'. [SQLSTATE 28000]

[298] SQLServer Error: 4060, Cannot open database "SSP_XYZ" requested by the login. The login failed. [SQLSTATE 42000]

The first idea was of course to check the user’s permissions within that database, but then came the weird part – a database with that name did not exist. Another idea was that maybe some old, forgotten web application still referenced the database name explicitly (e.g. in web.config). Not the cause either.

Finally, since the database name contained ‘SSP’, it most probably had something to do with a Shared Services Provider database. The current one has a different name, so the name occurring in the error logs referred to a non-existent SSP. We found out that such an SSP had been created and deleted a while ago. The corresponding database had also been removed from the SQL Server, but one remnant was left behind: a SQL Agent job for deleting expired sessions. The agent tried to connect to that database every minute and ran into the error mentioned above.

You can find the jobs either directly in the ‘msdb.dbo.sysjobs’ table, or in Object Explorer, under the “SQL Server Agent” node:

sql server agent jobs

Deleting or disabling the job that connects to the non-existent SSP’s database solves the problem.

Hope this helps,
Łukasz

“The specified address was excluded from the index”

Hello,

an issue that occurred recently was that a content source within our search SSP (MOSS 2007) did not contain any items after crawling. The crawl log in SharePoint’s Central Administration stated the following:

The specified address was excluded from the index. The crawl rules may have to be modified to include this address. (The item was deleted because it was either not found or the crawler was denied access to it.)

Interestingly, some of the content sources we already had were crawled without any obstacles, so a (mis)configuration of the problematic application seemed suspicious. After checking the permissions of the service accounts involved in the crawling process (not the cause), and after comparing the settings between the applications (not the cause either), the problem turned out to be in the crawl rules set up for this content source. The option for crawling complex URLs had not been activated for the subdomain URL we wanted to crawl. Enabling the “Crawl complex URLs (URLs that contain a question mark (?))” option under Shared Services Administration: SSP > Search Administration > Crawl rules > Add or Edit Crawl Rule and starting a full crawl from the beginning solved the problem.

But the question remained why the normal, non-complex URLs could not be crawled by the service. The cause was our IIS configuration, which is globally set up to automatically detect the cookie mode for session state. This results in a query string parameter being appended to the URL on the first request, so that the URL looks similar to this: http://www.ourdomain.com/index.html?AspxAutoDetectCookieSupport=1.

Now it seems pretty clear why the crawler had problems without the rule mentioned above. It failed at the first request to the root URL, since the redirected URL contained a query string and was therefore excluded as a complex URL. Hence, it could not continue crawling and left the index empty, along with the error/warning message shown above.

Hope this helps,
Łukasz