Showing posts with label processing. Show all posts

Thursday, March 29, 2012

cursor vs. select

Buddies,
The situation: some processing must take place on every row of a table, with
the results output to another table, and it can't be done via a single INSERT
INTO...SELECT query (let's assume that it's not possible for now).
There are 2 solutions I have in mind:
1) open a cursor and cycle through each row (the table can have up to
1M rows)
2) create a clustered index (e.g. on an identity column) then have a
loop like:
declare @i int, @rows int,
    @col1 varchar(20), @col2 varchar(20), ... @coln varchar(20),
    @outval1 varchar(20), ... -- output values
select @i = 1, @rows = max(xid) from tblname -- xid is clustered indexed
while (@i <= @rows)
begin
    select @col1 = col1, @col2 = col2, ... @coln = coln
    from tblname
    where xid = @i
    -- do the processing on the variables
    -- then insert the results into another table
    set @i = @i + 1
end
I'd like to know your ideas of which one would be more efficient. Any
other solutions are much appreciated
thanks,
Tamy

You obviously skipped the essential question: "Is there a better way
than processing the data a row at a time?" 99.99% of the time the
answer is YES.

On those other occasions, it may depend on what you are doing with the
data, but there probably isn't much to choose between the two
approaches. Lots of times a cursor IS the best way to process
row-by-row. For 1 million rows, though, I doubt it's even worth
considering doing it row-by-row. What makes you think that's the only
way?

> Any other solutions are much appreciated

For that we'll need a better spec so that we can write the INSERT
statement:
http://www.aspfaq.com/etiquette.asp?id=5006
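The point being: when the per-row logic can be expressed as column expressions, the whole loop collapses into one statement. A minimal sketch, where the target table `tblResults` and the `UPPER(...)` "processing" are invented for illustration; only `tblname`, `col1`, and `col2` come from the original post:

```sql
-- Hypothetical sketch: the row-by-row loop becomes a single set-based INSERT
-- when the per-row processing is expressible as column expressions.
INSERT INTO dbo.tblResults (col1, col2, outval1)
SELECT col1,
       col2,
       UPPER(col1) + '-' + col2   -- stand-in for the per-row processing
FROM   dbo.tblname;
```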

--
David Portas
SQL Server MVP
--|||>> processing must take place on every row of a table, and output
results to another table, that can't be done via an insert into..select
query (let's assume that it's not possible for now). <<

That is a **huge** leap of faith and goes against **decades** of SQL
programming experience. It also goes against some proofs in computer
science that a declarative language has the computational power of a
procedural language.

But assuming that the goal is to slow down and hurt the company as much
as possible while mis-using SQL, the cursor will be faster than the
proprietary, non-relational procedural approach. Look at the number of
passes over the base tables and time wasted on indexing.

On the other hand, if you would like to actually post the problem to
get a solution which will run an order of magnitude faster instead of
asking for kludges, then you can be better helped here. I feel like
I just got an email asking for help committing suicide.|||Haha,
chill out guys, I'm on your side. The thing is I don't have access to
the code that runs on every row (decrypting it is another story - I'd
be fired then). Option 2) runs in 30 minutes for 1M rows - it seems I gotta
be cool with it.|||On 8 Apr 2005 08:58:49 -0700, vuht2000 wrote:

>Haha,
>chill out guys, I'm on your side. The thing is I don't have access to
>the code that runs on every row (decrypting it is another story - I'd
>be fired then). The no. 2) runs in 30' for 1M rows - it seems I gotta
>be cool with it.

Hi Tamy,

I guess that being stuck with a stored proc that works only row-based
and that you can't legally change or replace counts as a valid reason
for using cursor-based logic.

Your solution #2 has one flaw: it assumes that all identity values will
be a contiguous range. That can't be guaranteed, though - there might be
gaps. You'll have to adapt the code to handle those.
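One gap-tolerant variant is to walk the clustered key with MIN(...) instead of assuming contiguous values. A sketch using the names from the original post:

```sql
-- Walk xid via MIN(xid) > @i, so gaps in the identity range are skipped.
DECLARE @i int, @col1 varchar(20), @col2 varchar(20);
SELECT @i = MIN(xid) FROM tblname;
WHILE @i IS NOT NULL
BEGIN
    SELECT @col1 = col1, @col2 = col2 FROM tblname WHERE xid = @i;
    -- per-row processing and insert into the output table here
    SELECT @i = MIN(xid) FROM tblname WHERE xid > @i;  -- next key, gap or not
END
```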

I expect that using a cursor will be faster - IF you use the correct
settings (FAST_FORWARD comes to mind), make sure that tempdb is on a
fast disk, and have your source table properly indexed. But the only way
to know for sure which method is the fastest is to test them both, in
your environment and with your data. Comment out the call to the stored proc
and the insert statement to test just the speed of the row-by-row
processing.
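For the cursor route, a LOCAL FAST_FORWARD declaration (read-only, forward-only) is usually the cheapest kind of cursor for this pattern; a sketch with the same hypothetical table and column names as above:

```sql
-- Read-only, forward-only cursor: the cheap settings Hugo mentions.
DECLARE @col1 varchar(20), @col2 varchar(20);
DECLARE cur CURSOR LOCAL FAST_FORWARD FOR
    SELECT col1, col2 FROM tblname;
OPEN cur;
FETCH NEXT FROM cur INTO @col1, @col2;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- call the row-based proc / insert the result here
    FETCH NEXT FROM cur INTO @col1, @col2;
END
CLOSE cur;
DEALLOCATE cur;
```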

Best, Hugo
--

(Remove _NO_ and _SPAM_ to get my e-mail address)

Cursor Tuning Help!

This cursor is processing one row per second; all it is doing is appending a value to a column. There are 847,000 rows and I can't leave the query to run for 10 days! Help!

CREATE procedure [dbo].sp_USR_TEMP_SESSIONAL_ATTENDANCE_APPEND
as

begin

DECLARE @I_UPN varchar(13)
DECLARE @I_SURNAME varchar(50)
DECLARE @I_FORENAME varchar(50)
DECLARE @I_DOB datetime
DECLARE @I_GENDER varchar(1)
DECLARE @I_LEA varchar(3)
DECLARE @I_DFES varchar(4)
DECLARE @I_ATTEND_YEAR varchar(4)
DECLARE @I_WEEK_BEGINNING datetime
DECLARE @I_ATTEND_CODES varchar(14)

declare @cnt int

set nocount on

declare cur CURSOR

for select upn,
surname,
forename,
dob,
gender,
'353' as lea,
dfes,
attend_year,
week_beginning,
attend_codes
from tmpATTEND_IMPORT

for read only

open cur

fetch from cur into
@I_UPN,
@I_SURNAME,
@I_FORENAME,
@I_DOB,
@I_GENDER,
@I_LEA,
@I_DFES,
@I_ATTEND_YEAR,
@I_WEEK_BEGINNING,
@I_ATTEND_CODES

truncate table tmpATTEND_IMPORT_APPENDED

while @@fetch_status = 0
begin
set @cnt = (select count(*) as cnt from tmpATTEND_IMPORT_APPENDED
where upn = @I_UPN
and surname = @I_SURNAME
and forename = @I_FORENAME
and dob = @I_DOB
and gender = @I_GENDER
and lea = @I_LEA
and dfes = @I_DFES
and attend_year = @I_ATTEND_YEAR)

if @cnt = 0 insert tmpATTEND_IMPORT_APPENDED(
upn,
surname,
forename,
dob,
gender,
lea,
dfes,
attend_year,
week_beginning,
attend_codes)
values (
@I_UPN,
@I_SURNAME,
@I_FORENAME,
@I_DOB,
@I_GENDER,
@I_LEA,
@I_DFES,
@I_ATTEND_YEAR,
@I_WEEK_BEGINNING,
@I_ATTEND_CODES)

else update tmpATTEND_IMPORT_APPENDED
set attend_codes = attend_codes + @I_ATTEND_CODES
where upn = @I_UPN and
surname = @I_SURNAME and
forename = @I_FORENAME and
dob = @I_DOB and
gender = @I_GENDER and
lea = @I_LEA and
dfes = @I_DFES and
attend_year = @I_ATTEND_YEAR

fetch next from cur into
@I_UPN,
@I_SURNAME,
@I_FORENAME,
@I_DOB,
@I_GENDER,
@I_LEA,
@I_DFES,
@I_ATTEND_YEAR,
@I_WEEK_BEGINNING,
@I_ATTEND_CODES

end

close cur

deallocate cur

end
GO

I tried adapting a post resolved on the Romanian SQL Server User Group (www.sqlserver.ro) to concatenate the columns:

DECLARE @MyTable TABLE (ID INT PRIMARY KEY, VAL VARCHAR(64))

INSERT INTO @MyTable VALUES (1, 'Cristian')
INSERT INTO @MyTable VALUES (2, 'Sorin')
INSERT INTO @MyTable VALUES (3, 'Narcis')

DECLARE @Lista VARCHAR(MAX)
SET @Lista = '';

SELECT @Lista = @Lista + VAL + ' ' FROM @MyTable

SELECT RTRIM(@Lista) AS Lista

GO

-- Output

Lista

Cristian Sorin Narcis

so try the following procedure :

CREATE procedure [dbo].sp_USR_TEMP_SESSIONAL_ATTENDANCE_APPEND
as

begin

DECLARE @I_UPN varchar(13)
DECLARE @I_SURNAME varchar(50)
DECLARE @I_FORENAME varchar(50)
DECLARE @I_DOB datetime
DECLARE @I_GENDER varchar(1)
DECLARE @I_LEA varchar(3)
DECLARE @I_DFES varchar(4)
DECLARE @I_ATTEND_YEAR varchar(4)
DECLARE @I_WEEK_BEGINNING datetime
DECLARE @I_ATTEND_CODES varchar(max) -- accumulates codes across rows, so wider than 14

set nocount on

declare cur CURSOR

-- select only distinct rows
for select upn,
surname,
forename,
dob,
gender,
'353' as lea,
dfes,
attend_year,
min(week_beginning) as week_beginning -- must be aggregated; it is not in the GROUP BY
from tmpATTEND_IMPORT
group by upn, surname, forename, dob, gender, dfes, attend_year -- lea is a constant, so it is not part of the key

for read only

open cur

fetch from cur into
@I_UPN,
@I_SURNAME,
@I_FORENAME,
@I_DOB,
@I_GENDER,
@I_LEA,
@I_DFES,
@I_ATTEND_YEAR,
@I_WEEK_BEGINNING

truncate table tmpATTEND_IMPORT_APPENDED

while @@fetch_status = 0
begin
-- concatenate attend_codes from every row that matches this key
set @I_ATTEND_CODES = ''
select @I_ATTEND_CODES = @I_ATTEND_CODES + attend_codes
from tmpATTEND_IMPORT
where upn = @I_UPN and
surname = @I_SURNAME and
forename = @I_FORENAME and
dob = @I_DOB and
gender = @I_GENDER and
dfes = @I_DFES and
attend_year = @I_ATTEND_YEAR

-- and insert the accumulated result

insert tmpATTEND_IMPORT_APPENDED(
upn,
surname,
forename,
dob,
gender,
lea,
dfes,
attend_year,
week_beginning,
attend_codes)
values (
@I_UPN,
@I_SURNAME,
@I_FORENAME,
@I_DOB,
@I_GENDER,
@I_LEA,
@I_DFES,
@I_ATTEND_YEAR,
@I_WEEK_BEGINNING,
@I_ATTEND_CODES)

fetch next from cur into
@I_UPN,
@I_SURNAME,
@I_FORENAME,
@I_DOB,
@I_GENDER,
@I_LEA,
@I_DFES,
@I_ATTEND_YEAR,
@I_WEEK_BEGINNING

end

close cur

deallocate cur

end
GO
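For what it's worth, on SQL Server 2005 the whole procedure body can be written as a single set-based INSERT...SELECT, using a correlated FOR XML PATH('') subquery to concatenate attend_codes per group. A sketch, assuming (like the cursor version) that lea is always the literal '353' and that week_beginning can reasonably be aggregated with MIN:

```sql
TRUNCATE TABLE tmpATTEND_IMPORT_APPENDED;

INSERT tmpATTEND_IMPORT_APPENDED
    (upn, surname, forename, dob, gender, lea, dfes,
     attend_year, week_beginning, attend_codes)
SELECT t.upn, t.surname, t.forename, t.dob, t.gender,
       '353', t.dfes, t.attend_year, MIN(t.week_beginning),
       -- concatenate every attend_codes value for this key in one pass
       (SELECT i.attend_codes AS [text()]
        FROM   tmpATTEND_IMPORT AS i
        WHERE  i.upn = t.upn AND i.surname = t.surname
          AND  i.forename = t.forename AND i.dob = t.dob
          AND  i.gender = t.gender AND i.dfes = t.dfes
          AND  i.attend_year = t.attend_year
        FOR XML PATH(''))
FROM   tmpATTEND_IMPORT AS t
GROUP BY t.upn, t.surname, t.forename, t.dob,
         t.gender, t.dfes, t.attend_year;
```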

Thursday, March 22, 2012

Cursor alternative

Does anybody have a way to re-write this without using a cursor? This takes a
long time to finish. The goal is to cut the processing time.
DECLARE @the_emp_id nvarchar(50)
Begin
Declare EmpCursor Cursor For
SELECT Employees.dbo.tbl_emplist.emp_id
FROM Employees.dbo.tbl_emplist WHERE emp_age = 55
FOR READ ONLY
end
Open EmpCursor
While(0=0) Begin
Fetch Next From EmpCursor Into @the_emp_id
If(@@Fetch_Status <> 0) Break
--execute these stored procedures
EXEC usp_calculate_retirement @the_emp_id
EXEC usp_calculate_benefits @the_emp_id
EXEC usp_calculate_vacation @the_emp_id
EXEC usp_calculate_bonuses @the_emp_id
EXEC usp_calculate_promotion @the_emp_id
End
Close EmpCursor
Deallocate EmpCursor
Help highly appreciated.

What you have to do is look at the functionality of each of the procs that
you exec. By the looks of it, you're doing work for just one employee at a
time in those procs. You need to fuse that functionality with the SELECT on
which you declared your cursor. You can do an UPDATE with a JOIN, for
example.
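A hedged sketch of that UPDATE-with-JOIN shape: the benefits table, its columns, and the retirement rule are invented for illustration; only tbl_emplist, emp_id, and emp_age come from the post.

```sql
-- Hypothetical: fold one proc's per-employee logic into a single joined UPDATE.
UPDATE b
SET    b.retirement_date = DATEADD(year, 10, e.hire_date)  -- invented logic
FROM   dbo.tbl_benefits AS b                               -- invented table
JOIN   Employees.dbo.tbl_emplist AS e ON e.emp_id = b.emp_id
WHERE  e.emp_age = 55;
```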
--
Tom
----
Thomas A. Moreau, BSc, PhD, MCSE, MCDBA, MCITP, MCTS
SQL Server MVP
Toronto, ON Canada
https://mvp.support.microsoft.com/profile/Tom.Moreau
"morphius" <morphius@.discussions.microsoft.com> wrote in message
news:92F8D983-F31D-41DB-A2AE-DBB28F545C1C@.microsoft.com...
Does anybody have a way to re-write this without using a cursor? This takes a
long time to finish. The goal is to cut the processing time.
DECLARE @the_emp_id nvarchar(50)
Begin
Declare EmpCursor Cursor For
SELECT Employees.dbo.tbl_emplist.emp_id
FROM Employees.dbo.tbl_emplist WHERE emp_age = 55
FOR READ ONLY
end
Open EmpCursor
While(0=0) Begin
Fetch Next From EmpCursor Into @the_emp_id
If(@@Fetch_Status <> 0) Break
--execute these stored procedures
EXEC usp_calculate_retirement @the_emp_id
EXEC usp_calculate_benefits @the_emp_id
EXEC usp_calculate_vacation @the_emp_id
EXEC usp_calculate_bonuses @the_emp_id
EXEC usp_calculate_promotion @the_emp_id
End
Close EmpCursor
Deallocate EmpCursor
Help highly appreciated.|||Rewrite each of the sprocs to do their work using set-based logic rather
than processing one employee at a time. Since you call each of them with a
cursor, I bet you prolly have some cursors inside the sprocs too. Classic
mistake when developing database applications. You are not alone! :-)
The problem is that shifting your mindset to set-based from row-based can be
a VERY difficult undertaking. The two are not even in the same solar
system.
Read some beginning Transact SQL books. Hire a consultant/mentor to fix
your stuff and learn from him/her at the same time.
--
TheSQLGuru
President
Indicium Resources, Inc.
"morphius" <morphius@.discussions.microsoft.com> wrote in message
news:92F8D983-F31D-41DB-A2AE-DBB28F545C1C@.microsoft.com...
> Does anybody have a way to re-write this without using a cursor? This takes a
> long time to finish. The goal is to cut the processing time.
> DECLARE @the_emp_id nvarchar(50)
> Begin
> Declare EmpCursor Cursor For
> SELECT Employees.dbo.tbl_emplist.emp_id
> FROM Employees.dbo.tbl_emplist WHERE emp_age = 55
> FOR READ ONLY
> end
> Open EmpCursor
> While(0=0) Begin
> Fetch Next From EmpCursor Into @the_emp_id
> If(@@Fetch_Status <> 0) Break
> --execute these stored procedures
> EXEC usp_calculate_retirement @the_emp_id
> EXEC usp_calculate_benefits @the_emp_id
> EXEC usp_calculate_vacation @the_emp_id
> EXEC usp_calculate_bonuses @the_emp_id
> EXEC usp_calculate_promotion @the_emp_id
> End
> Close EmpCursor
> Deallocate EmpCursor
> Help highly appreciated.
>|||On Aug 31, 7:32 pm, morphius <morph...@.discussions.microsoft.com>
wrote:
> Does anybody have a way to re-write this without using a cursor? This takes a
> long time to finish. The goal is to cut the processing time.
> DECLARE @the_emp_id nvarchar(50)
> Begin
> Declare EmpCursor Cursor For
> SELECT Employees.dbo.tbl_emplist.emp_id
> FROM Employees.dbo.tbl_emplist WHERE emp_age = 55
> FOR READ ONLY
> end
> Open EmpCursor
> While(0=0) Begin
> Fetch Next From EmpCursor Into @the_emp_id
> If(@@Fetch_Status <> 0) Break
> --execute these stored procedures
> EXEC usp_calculate_retirement @the_emp_id
> EXEC usp_calculate_benefits @the_emp_id
> EXEC usp_calculate_vacation @the_emp_id
> EXEC usp_calculate_bonuses @the_emp_id
> EXEC usp_calculate_promotion @the_emp_id
> End
> Close EmpCursor
> Deallocate EmpCursor
> Help highly appreciated.
create procedure calculate as
begin
exec usp_calculate_retirement
exec usp_calculate_benefits
exec usp_calculate_vacation
exec usp_calculate_bonuses
exec usp_calculate_promotion
end
Then rewrite each proc so that it handles all employees in one set-based pass.

Thursday, March 8, 2012

Cumulative hotfix package (build 2153)

I was wondering where I'd find specific information on the individual fixes,
specifically bug 447: Processing performance on multiprocessor computers is
somewhat slower than expected.
I've searched for info on each of the fixes listed, but so far have not
found any details.
Thanks very much,
TomT

Hello,
Unfortunately, the Microsoft Knowledge Base articles that discuss those fixes
haven't been released yet. If we have any update on this, we will let you
know. As far as I know, this bug concerns some read-lock issues between
processors that may add overhead.
Best Regards,
Peter Yang
MCSE2000/2003, MCSA, MCDBA
Microsoft Online Partner Support
When responding to posts, please "Reply to Group" via your newsreader so
that others may learn and benefit from your issue.
=====================================================
This posting is provided "AS IS" with no warranties, and confers no rights.|||Thanks Peter - I'll keep an eye out for any new kb's on these fixes.
""privatenews"" wrote:

> Hello,
> Unfortunately Microsoft Knowledge Base articles that discuss those fixes
> haven't been released yet. If we have any update on this, we will let you
> know. As I know this bug is regarding some issues of read locks between
> processors that may bring overhead.
> Best Regards,
> Peter Yang
> MCSE2000/2003, MCSA, MCDBA
> Microsoft Online Partner Support
> When responding to posts, please "Reply to Group" via your newsreader so
> that others may learn and benefit from your issue.
> =====================================================
>
> This posting is provided "AS IS" with no warranties, and confers no rights.
>
>


Cubes process error

Hi all!

After processing my cubes I have this error: Server error : Process error [object doesn't exist] 'Partner' ; ?

Can someone help me?
Thanks!

Try this - go to Business Intelligence Development Studio, open your data source view, right-click on the diagram, and choose the "Refresh..." option. This will tell you what differences exist between what you have in SSAS and what is really in the database. It looks like one of your dimensions or measure groups references a table that no longer exists in the source database.

Vidas Matelis

My Blog: http://www.ssas-info.com/content/blogcategory/14/36/

Cubes not available after processing

The problem is that the cube is not available for queries, even though a Process Structure has been run, along with a full process of the dimensions and of the measure groups.

This is what I did:
I imported the SSAS project from my live server to my local machine (laptop). I made some changes to the dimensions and measure groups in the project and deployed the solution to the server. After this I did a Process Structure. I logged into Analysis Server, opened an MDX script, and the cube was available. Then I did a FULL process of all the dimensions to be sure that everything is all right with them. Then I reprocessed all my measure groups.

Now the cube is not available. This has happened twice now. I process the cube by using ascmd.exe, executing XMLA commands.
After I do a process structure again it works. I can see the cube and the measure data is available after processing the measure groups.

I cannot explain this behaviour. Can you?

Best regards,
Stefoon

What happens is that when you do a full process of a dimension, the structures of any related cubes become "unprocessed". Processing a measure group under the cube does not implicitly reprocess the structure of the cube.

After you do a full process of a dimension, you will need to reprocess the structure again.

|||

Hi,

I think you are right. I forgot about the fact that a full process destroys the dimension maps for the cube structure.

Best regards,

Stefoon

Cube Processing with IS packages

Hi,

Just have a quick question regarding the cube processing.

Currently, I've created an IS package for the cube processing, for better control.

Before processing the cube, I have to update the dimensions first (the dimensions update every day, and it gives an error if I process the cube itself without the dimension updates). Do I then have to create a 'dimension processing' data flow inside the AS processing control task?

If so, I need to link the dimension table and connect columns to the dimension in the cubes - what else do I need to think about before creating this?

I appreciate if anyone can give a suggestion.

Thanks.

Hello! In the star schema in the relational source you should have foreign key relations from the dimension tables to the fact table.

In the data source view, in the BIDS cube project, you will need to relate these tables once again, if this is not done automatically.

Your deployed cube is only dependent on the data source when you process the cube, if it is a MOLAP cube.

After you have updated your relational tables (dimensions and fact table), you will only need to process the cube dimensions first and then the cube (or measure group) after that.

HTH

Thomas Ivarsson

|||

HI,

Thanks for your reply.

I've already made the relationships between facts and dimensions in the DSV and also checked Dimension Usage in each cube; they are all MOLAP.

When I process the cube in SSAS, I need to check the dimension update along with the cube process in the object list, and then it does not give an error; if I don't, it gives an error (like "attribute cannot be found", since the dimensions are not updated). My question is: in SSIS I have to check the dimensions when I run a package, but is there any way to process them automatically instead of checking the dimensions every time I process the cube?

I appreciate if you can give me any comments.

Thanks.

|||

Hello! I assume that you have also included the processing of dimensions in your SSIS package, before the cube?

In the Process Cube form in BIDS you have a button at the bottom (Advanced?), and there you have a setting like "process affected objects" that you can activate.

HTH

Thomas Ivarsson

Wednesday, March 7, 2012

cube will not process after SP2 upgrade

Hi,

I installed SP2 two days ago and a cube that had not changed will not complete its processing in what I consider a reasonable time; I've waited up to three hours before I kill the process. I went back in to business intelligence development studio and found that I had some redundant relation attributes and fixed them. I deployed each dimension individually, and attempted to process the cube. It gets to the "processing cube" line where it runs a the query extracting the data from the ROLAP fact table and just sits there. The msmdsrv.exe process is busy but sqlservr.exe is not. I can't tell if anything is happening. Does anyone have any suggestions?

Regards,

Trent

Getting exactly the same problem. Applied SP2 and the cube starts processing but never completes with no obvious errors. Desperate for a solution.....



Cube Processing Issue

We are trying to process the cube. The first time we processed the cube it took around 1 1/2 hours to do a full process. But after some time, in the same environment, it's taking more than 6 hours to process. There was no change in the environment. We have a 64-bit 2-way Itanium processor with 32 GB of RAM. The size of the cube is around 150 MB. We are using the OLE DB2 provider to connect to the database. Are there any environmental issues that need to be checked? Any thoughts on this issue? Appreciate your help.

Thanks,

Really hard to say what is going on.

Many factors could be affecting processing performance.

I would first check the performance of the relational database. See what rate DB2 is delivering the data at. 150 MB of data should process much faster in Analysis Server.

Edward Melomed.
--
This posting is provided "AS IS" with no warranties, and confers no rights.

|||

There does not seem to be any issue on the relational database side. If I run the same cube from that machine, it processes very fast. Are there any factors on the Analysis box that could account for the slow processing?

The Analysis box is a 64-bit 2-way processor with 64 GB of RAM.

Thanks,

|||

Try tracing the command AS sends to the relational database, and then see what rate your 64-bit box can get data from the relational database.

Again, it is hard to say without looking at the cube. Try contacting Customer Support Service and have a support engineer look at your situation.

Edward Melomed.
--
This posting is provided "AS IS" with no warranties, and confers no rights.

Cube processing error: "The end of the input was reached"

I created a cube with AMO and examined it in BI Studio by importing. Everything looks fine, but I get the following error when processing the cube:

"Parser: The end of the input was reached".

The processing of the dimensions appears to be successful. SSAS processes the cube immediately after finishing the dimensions. That's where the error occurs. What is odd is that the process progress dialog box shows everything completed successfully. I am quite sure that the processing of the cube was not completed, because I cannot browse it and I know it takes much more than a duration of 0 seconds to process the cube.

Could anyone offer some hint?

Thanks,

hz

It looks like the script you have for your cube contains a syntax error. At the least, it should have a "CALCULATE;" statement.
|||

I did not write the script. I wrote code in C# using AMO to create the cube. I do not know how to look at the script, but I scripted the cube to a query editor window and did not see any CALCULATE statement. I also scripted a cube that is working perfectly and did not see CALCULATE in its script either.

At this point of time, I am temporarily giving up using AMO to create cubes and resorting to other means.

hz

Cube Processing Error - Urgent - Thanks

First, when I processed the cubes (Process Data) using an SSIS package, I got a memory error:

"Memory error: Allocation failure : Not enough storage is available to process this command."

I changed "OLAP \ Process \ BufferMemoryLimit" to 30%. I have the /3GB setup, and I am using AS 2005, 32-bit.

When I reran the package, I got an OLAP storage engine error:

Errors in the OLAP storage engine: An error occurred while processing the 'Scenario Analysis 2006' partition of the 'Scenario Analysis' measure group for the 'Scenario Analysis' cube from the Risk Reports database.

I don't know what that is, so I ran a cube process (Full) from Management Studio; while reading data, the process stopped with the following popup message:

The trace has been stopped
Unexpected end of the file has occurred. The following elements are not closed: root, return, ExecuteResponse, soap:Body soap: Envelope. Line 13180, position 1. (System.Xml)

Thanks for any help - Ashok

Please check if you're running out of disk space...


cube processing error - perhaps someone can explain

I have one fact table that is also a dimension table, plus one other dimension table.

When I deployed the cube, I was getting a "key attribute missing" error.

I went back and did a "Process Full" on the fact dimension, and then the cube would deploy without the error.

Should a fact dimension always be set to process full?

thanks

What could have happened here is:

You processed your dimension earlier, and then some more data was inserted into your table. Processing of the cube will see that the dimension is already processed and will only start processing the partitions. The partition processing will then find new members that came with the new data and will complain during processing.

You don't have to use ProcessFull for your dimensions to keep them in sync; you can run ProcessUpdate for dimensions. But if processing of your dimensions and the cube doesn't take long, I would say do the ProcessFull.

Here is a whitepaper that gives a bit more detail about processing: http://msdn.microsoft.com/library/default.asp?url=/library/en-us/dnsql90/html/sql2k5_asprocarch.asp

Hope that helps.

Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.

|||

I went in and changed the process option for the dimension to 'Process Full';

however, it always defaults back to 'Process Update'.

Is there a way to lock it on Process Full?

|||

Click on the "Script" button in the processing dialog to get the generated XMLA processing command.

You can then send that same XMLA command to process your dimension instead of using the processing dialog.

Edward.
--
This posting is provided "AS IS" with no warranties, and confers no rights.
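
For reference, a scripted ProcessFull command for a dimension looks roughly like the sketch below, which you can run from an XMLA query window in Management Studio. The database and dimension IDs are placeholders; substitute the object IDs (not the display names) that the "Script" button generates for your own project:

```xml
<Batch xmlns="http://schemas.microsoft.com/analysisservices/2003/engine">
  <Parallel>
    <Process>
      <Object>
        <DatabaseID>MyOlapDatabase</DatabaseID>
        <DimensionID>MyFactDimension</DimensionID>
      </Object>
      <!-- Use ProcessUpdate here instead if you only want to pick up
           new or changed dimension members without a full rebuild -->
      <Type>ProcessFull</Type>
    </Process>
  </Parallel>
</Batch>
```

Because the command is plain XMLA, it can also be scheduled (for example from a SQL Agent job or an SSIS Analysis Services Execute DDL task) so the processing option cannot silently revert in the dialog.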

Cube Processing Error - OLE DB error: OLE DB or ODBC error: Class not registered.

Problem

I get the above error when trying to process the cube.

Environment

Server OS - Windows 2003 64Bit

SQL Server 2005 64Bit

Database - Teradata

Connectivity - OLE DB for Teradata

Testing

- Have successfully tested OLE DB connectivity using Hyperion, connecting using an OLE DB Connection file.

- Get the same error when trying to process a sample MS Access database or Teradata

- Uninstalled and reinstalled SQL Server 2005

If anyone has any ideas it would be greatly appreciated, thanks.

It might be that the provider (in the connection string of the data source) is set to the 32-bit version (if you used BI Development Studio or SQL Management Studio on a 32-bit client machine). Then you deploy and, during processing, the AS2005 server tries to instantiate the provider and fails (because you have the 64-bit version of the provider installed on the server machine).

Adrian
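
To check this, open the data source in BI Development Studio and look at the Provider= token in the connection string. A hypothetical example is shown below; the server name is a placeholder, and TDOLEDB is assumed here to be the ProgID of the Teradata OLE DB provider, so verify it against what is actually registered (as 64-bit) on your server:

```text
Provider=TDOLEDB;Data Source=my-teradata-server;Persist Security Info=True;User ID=...;
```

If the ProgID in the deployed data source resolves only to a 32-bit DLL on the server, the 64-bit AS2005 service will fail to instantiate it with exactly the "Class not registered" error above.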

|||

Thanks,

I used Visual Studio 2005 to develop a Test Cube on the Server after the installation.

Background

I originally installed SQL Server 2005 using a Local Account but it was then decided that it was best done under a Domain account (Up until this point I could process AS2005 cubes). So I uninstalled SQL Server 2005 and reinstalled using the domain account.

I think somewhere along the way a Registry File has become corrupt. Is there any way to test this?

Next Step

If I am unable to resolve this issue I will re-image the server and hopefully this will remove any corrupt Registry files.

Any other ideas?

Thanks

Cube Processing Error

Hello,
I'm getting an error while processing some cubes. "Cannot lock object
'lazy processing token'. Error: Connection to the server is lost." This
only occurs while processing certain cubes on the 64-bit version of AS
and occurs just after the 'Writing Aggregations and indexes' portion of
processing.
Any recommendations or ideas would be appreciated.
-Justin
Justin,
I am having the same problem. Let me know if you were able to find a solution.
Luke

Quote:

Originally posted by Justin
Hello,
I'm getting an error while processing some cubes. "Cannot lock object
'lazy processing token'. Error: Connection to the server is lost." This
only occurs while processing certain cubes on the 64-bit version of AS
and occurs just after the 'Writing Aggregations and indexes' portion of
processing.
Any recommendations or ideas would be appreciated.
-Justin

|||

Quote:

Originally posted by Justin
Hello,
We also have the same problem: "Cannot lock object 'lazy processing token'. Error: Connection to the server is lost." We are using separate servers: one for SQL (64-bit SQL Server on Windows 2003 — Microsoft SQL Server 2000, 8.00.760 (Intel IA-64), Enterprise Edition (64-bit) on Windows NT 5.2, Build 3790) and another for the AS server (64-bit Analysis Services on Windows 2003, version 8.00.382). We are processing the cubes from the 64-bit AS. Some of the cubes fail (not all of them). When we process the cubes from another, 32-bit AS, they don't fail. Do you have a similar system configuration?
After we redesigned the cube that was failing on 64-bit AS, it did not fail.
-Selma

|||Our problem was most likely related to a firewall setting as described in an earlier post. The cube could process locally, or from a DTS package, but not remotely. I found out our firewall kills inactive connections after 30 minutes.
The cube processing only fails at the last second, after it has been giving status for the entire process. My guess is there are two connections opened; one is used to show status messages and is active throughout the process; not sure what the other one is used for, but it is inactive until the commit, which explains why our firewall is dropping the connection - and then the commit fails.
Luke

Quote:

Originally posted by Selma

Cube Processing Error

Hi,
I am trying to process an Analysis Server cube and I am getting an error message saying:
"Syntax error converting the varchar value 'A.H' to a column of data type int."; 22018.
Can somebody tell me how to remove this error?
Thanks,
Praveen

Clean up the data? It is telling you that you are trying to convert a string that contains "A.H" to an int, and not even my creative interpretations of literals can manage that feat!
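
A quick way to track down the offending rows, sketched here against hypothetical table and column names (your_table and your_column are placeholders for the source table the cube reads from), is to filter for values that won't convert:

```sql
-- Find the rows whose values cannot be converted to int.
-- ISNUMERIC is a rough filter: it also accepts values such as '1e3'
-- or '$5' that still won't cast cleanly to int, so inspect the
-- results before deciding how to clean them up.
SELECT your_column
FROM your_table
WHERE ISNUMERIC(your_column) = 0;
```

Once the bad values (like 'A.H') are fixed or excluded in the source query, the cube should process without the conversion error.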

-PatP

|||

Thanks for the reply. It's solved now.